Test Report: Docker_Linux_crio 22332

56e1ce855180c73f84c0d958e6323d58f60b3065:2025-12-27:43013

Failed tests (26/332)

TestAddons/serial/Volcano (0.24s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-416077 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-416077 addons disable volcano --alsologtostderr -v=1: exit status 11 (237.53645ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 19:56:41.045994   23648 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:56:41.046269   23648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:56:41.046277   23648 out.go:374] Setting ErrFile to fd 2...
	I1227 19:56:41.046281   23648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:56:41.046496   23648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 19:56:41.046730   23648 mustload.go:66] Loading cluster: addons-416077
	I1227 19:56:41.047048   23648 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:56:41.047066   23648 addons.go:622] checking whether the cluster is paused
	I1227 19:56:41.047152   23648 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:56:41.047175   23648 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:56:41.047527   23648 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:56:41.066434   23648 ssh_runner.go:195] Run: systemctl --version
	I1227 19:56:41.066494   23648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:56:41.084212   23648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:56:41.172956   23648 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:56:41.173029   23648 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:56:41.200061   23648 cri.go:96] found id: "b8a7bd9235b6a4caf6c2c4f7c4b1baa79ec8c0c022ed5dcff15829e54b1c454a"
	I1227 19:56:41.200088   23648 cri.go:96] found id: "209dc9e84dfba91aa626a0ca2d1bd87c93031429570f17334be471bb3447e5ae"
	I1227 19:56:41.200094   23648 cri.go:96] found id: "fd00ed25c985f64803a9055462ba7c0c73bbd5fd4120bb886eae5a2175f1997d"
	I1227 19:56:41.200099   23648 cri.go:96] found id: "e3bf55084b947cd6994ef964c03369061f8969a19d8255a0896caa1ed088455c"
	I1227 19:56:41.200103   23648 cri.go:96] found id: "6869a50a73d8de32c3dc51943aaa7488db0b52f9e50210bfacfc403e4b31abc3"
	I1227 19:56:41.200108   23648 cri.go:96] found id: "b1ec3b5e74d5f6a51732db328530d3318b36f99c9aec5711665c9196b1e154d0"
	I1227 19:56:41.200112   23648 cri.go:96] found id: "d88dbb28fc23c0c83a0e365974483d1d1857e5586aef6314c2ff3127b10883b0"
	I1227 19:56:41.200116   23648 cri.go:96] found id: "40622d51884d5a6d8ee55df90ebc729b6a3c523fc333355803131505ff3eb966"
	I1227 19:56:41.200121   23648 cri.go:96] found id: "0a91b3f02715c922c8d3b01e11f8224661e1ef5acc846bd67345788d0ca606bd"
	I1227 19:56:41.200128   23648 cri.go:96] found id: "1a50bd427e4c7baae7f3c47b456accb85ebbdde4eab6a955edcabd615b794f9e"
	I1227 19:56:41.200132   23648 cri.go:96] found id: "d3a037a6b817dd11c5ec44e00936772f3188aa489158b49a9834b054d01c82c7"
	I1227 19:56:41.200138   23648 cri.go:96] found id: "23bfea2446ed16492bae9feadf79f1824dcfe4a849e48f5de8c800dbd990887c"
	I1227 19:56:41.200142   23648 cri.go:96] found id: "91336251bde316e5658d75f0212a65106cb58aa18bf9b1c9f4d3c56c5f26c8ff"
	I1227 19:56:41.200148   23648 cri.go:96] found id: "ca648bdc045e5c3e33cb4b4a17e50db157a0627a3c8b78c16aa0df32b6ac6a0f"
	I1227 19:56:41.200155   23648 cri.go:96] found id: "5c6285e9513d1d5dca2d137de3e5116c2afb944c760615277ecacea48b4df496"
	I1227 19:56:41.200167   23648 cri.go:96] found id: "19ba832dc0107404daa365e890cf11d99044af20216b50eeb6911292fc34aaa0"
	I1227 19:56:41.200171   23648 cri.go:96] found id: "bb1c2025dc24f7c70349970cd631d0d6533295514fad7908a0cb32e5a75c7b16"
	I1227 19:56:41.200177   23648 cri.go:96] found id: "3321a7f88f9d12626ac15d719b4297b42e6f8b07de2f8126dcbac0a1de31b158"
	I1227 19:56:41.200182   23648 cri.go:96] found id: "1502dada111653661c0e71a273490dd919cf35583ca9b21e2bc09c337661c35e"
	I1227 19:56:41.200192   23648 cri.go:96] found id: "49d7f2a724dd38975a39bb75f10f2c0994c7851ab44689d5ebc0a9b214f17cf0"
	I1227 19:56:41.200198   23648 cri.go:96] found id: "974b47f0c91a71d1e2e051d35f13cc1107f3c144323412c0713a463400700010"
	I1227 19:56:41.200213   23648 cri.go:96] found id: "0260af8be5e3cd213383de25d2bae0781b2931284e151ca5e6c8d49dad444e87"
	I1227 19:56:41.200218   23648 cri.go:96] found id: "894722f2278ed9a7a94031fcca594d94b7f3435e0dfae78ff810a233ffb75601"
	I1227 19:56:41.200225   23648 cri.go:96] found id: "cf4602f54ae9bc8b2fbe9e14c800a848f78488296da5de81535f76b847b75c95"
	I1227 19:56:41.200231   23648 cri.go:96] found id: ""
	I1227 19:56:41.200275   23648 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:56:41.213870   23648 out.go:203] 
	W1227 19:56:41.214944   23648 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:56:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:56:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:56:41.214958   23648 out.go:285] * 
	* 
	W1227 19:56:41.215637   23648 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:56:41.216756   23648 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-416077 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.24s)

TestAddons/parallel/Registry (12.82s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 3.606096ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-98b8s" [4ab3c6c7-add5-435f-98d6-6f17591d3018] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003569468s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-k2x7x" [c8b8d81c-61d1-4f10-b00a-9e224d8314a9] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002395813s
addons_test.go:394: (dbg) Run:  kubectl --context addons-416077 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-416077 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-416077 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.396355545s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-416077 ip
2025/12/27 19:57:01 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-416077 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-416077 addons disable registry --alsologtostderr -v=1: exit status 11 (227.49492ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 19:57:01.612926   26485 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:57:01.613215   26485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:57:01.613226   26485 out.go:374] Setting ErrFile to fd 2...
	I1227 19:57:01.613230   26485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:57:01.613424   26485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 19:57:01.613680   26485 mustload.go:66] Loading cluster: addons-416077
	I1227 19:57:01.613969   26485 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:57:01.613987   26485 addons.go:622] checking whether the cluster is paused
	I1227 19:57:01.614067   26485 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:57:01.614079   26485 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:57:01.614407   26485 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:57:01.631176   26485 ssh_runner.go:195] Run: systemctl --version
	I1227 19:57:01.631218   26485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:57:01.647547   26485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:57:01.736068   26485 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:57:01.736154   26485 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:57:01.765941   26485 cri.go:96] found id: "b8a7bd9235b6a4caf6c2c4f7c4b1baa79ec8c0c022ed5dcff15829e54b1c454a"
	I1227 19:57:01.765975   26485 cri.go:96] found id: "209dc9e84dfba91aa626a0ca2d1bd87c93031429570f17334be471bb3447e5ae"
	I1227 19:57:01.765979   26485 cri.go:96] found id: "fd00ed25c985f64803a9055462ba7c0c73bbd5fd4120bb886eae5a2175f1997d"
	I1227 19:57:01.765982   26485 cri.go:96] found id: "e3bf55084b947cd6994ef964c03369061f8969a19d8255a0896caa1ed088455c"
	I1227 19:57:01.765986   26485 cri.go:96] found id: "6869a50a73d8de32c3dc51943aaa7488db0b52f9e50210bfacfc403e4b31abc3"
	I1227 19:57:01.765990   26485 cri.go:96] found id: "b1ec3b5e74d5f6a51732db328530d3318b36f99c9aec5711665c9196b1e154d0"
	I1227 19:57:01.765993   26485 cri.go:96] found id: "d88dbb28fc23c0c83a0e365974483d1d1857e5586aef6314c2ff3127b10883b0"
	I1227 19:57:01.765996   26485 cri.go:96] found id: "40622d51884d5a6d8ee55df90ebc729b6a3c523fc333355803131505ff3eb966"
	I1227 19:57:01.765998   26485 cri.go:96] found id: "0a91b3f02715c922c8d3b01e11f8224661e1ef5acc846bd67345788d0ca606bd"
	I1227 19:57:01.766008   26485 cri.go:96] found id: "1a50bd427e4c7baae7f3c47b456accb85ebbdde4eab6a955edcabd615b794f9e"
	I1227 19:57:01.766011   26485 cri.go:96] found id: "d3a037a6b817dd11c5ec44e00936772f3188aa489158b49a9834b054d01c82c7"
	I1227 19:57:01.766014   26485 cri.go:96] found id: "23bfea2446ed16492bae9feadf79f1824dcfe4a849e48f5de8c800dbd990887c"
	I1227 19:57:01.766017   26485 cri.go:96] found id: "91336251bde316e5658d75f0212a65106cb58aa18bf9b1c9f4d3c56c5f26c8ff"
	I1227 19:57:01.766019   26485 cri.go:96] found id: "ca648bdc045e5c3e33cb4b4a17e50db157a0627a3c8b78c16aa0df32b6ac6a0f"
	I1227 19:57:01.766062   26485 cri.go:96] found id: "5c6285e9513d1d5dca2d137de3e5116c2afb944c760615277ecacea48b4df496"
	I1227 19:57:01.766077   26485 cri.go:96] found id: "19ba832dc0107404daa365e890cf11d99044af20216b50eeb6911292fc34aaa0"
	I1227 19:57:01.766082   26485 cri.go:96] found id: "bb1c2025dc24f7c70349970cd631d0d6533295514fad7908a0cb32e5a75c7b16"
	I1227 19:57:01.766086   26485 cri.go:96] found id: "3321a7f88f9d12626ac15d719b4297b42e6f8b07de2f8126dcbac0a1de31b158"
	I1227 19:57:01.766091   26485 cri.go:96] found id: "1502dada111653661c0e71a273490dd919cf35583ca9b21e2bc09c337661c35e"
	I1227 19:57:01.766094   26485 cri.go:96] found id: "49d7f2a724dd38975a39bb75f10f2c0994c7851ab44689d5ebc0a9b214f17cf0"
	I1227 19:57:01.766097   26485 cri.go:96] found id: "974b47f0c91a71d1e2e051d35f13cc1107f3c144323412c0713a463400700010"
	I1227 19:57:01.766100   26485 cri.go:96] found id: "0260af8be5e3cd213383de25d2bae0781b2931284e151ca5e6c8d49dad444e87"
	I1227 19:57:01.766105   26485 cri.go:96] found id: "894722f2278ed9a7a94031fcca594d94b7f3435e0dfae78ff810a233ffb75601"
	I1227 19:57:01.766109   26485 cri.go:96] found id: "cf4602f54ae9bc8b2fbe9e14c800a848f78488296da5de81535f76b847b75c95"
	I1227 19:57:01.766111   26485 cri.go:96] found id: ""
	I1227 19:57:01.766157   26485 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:57:01.780315   26485 out.go:203] 
	W1227 19:57:01.781557   26485 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:57:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:57:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:57:01.781585   26485 out.go:285] * 
	* 
	W1227 19:57:01.782471   26485 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:57:01.783500   26485 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-416077 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (12.82s)

TestAddons/parallel/RegistryCreds (0.42s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 2.838021ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-416077
addons_test.go:334: (dbg) Run:  kubectl --context addons-416077 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-416077 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-416077 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (245.739796ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 19:57:07.026367   26853 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:57:07.026509   26853 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:57:07.026518   26853 out.go:374] Setting ErrFile to fd 2...
	I1227 19:57:07.026522   26853 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:57:07.026700   26853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 19:57:07.026974   26853 mustload.go:66] Loading cluster: addons-416077
	I1227 19:57:07.028146   26853 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:57:07.028238   26853 addons.go:622] checking whether the cluster is paused
	I1227 19:57:07.028584   26853 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:57:07.028626   26853 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:57:07.029227   26853 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:57:07.048889   26853 ssh_runner.go:195] Run: systemctl --version
	I1227 19:57:07.049006   26853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:57:07.070300   26853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:57:07.166726   26853 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:57:07.166805   26853 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:57:07.196470   26853 cri.go:96] found id: "b8a7bd9235b6a4caf6c2c4f7c4b1baa79ec8c0c022ed5dcff15829e54b1c454a"
	I1227 19:57:07.196495   26853 cri.go:96] found id: "209dc9e84dfba91aa626a0ca2d1bd87c93031429570f17334be471bb3447e5ae"
	I1227 19:57:07.196501   26853 cri.go:96] found id: "fd00ed25c985f64803a9055462ba7c0c73bbd5fd4120bb886eae5a2175f1997d"
	I1227 19:57:07.196506   26853 cri.go:96] found id: "e3bf55084b947cd6994ef964c03369061f8969a19d8255a0896caa1ed088455c"
	I1227 19:57:07.196510   26853 cri.go:96] found id: "6869a50a73d8de32c3dc51943aaa7488db0b52f9e50210bfacfc403e4b31abc3"
	I1227 19:57:07.196515   26853 cri.go:96] found id: "b1ec3b5e74d5f6a51732db328530d3318b36f99c9aec5711665c9196b1e154d0"
	I1227 19:57:07.196520   26853 cri.go:96] found id: "d88dbb28fc23c0c83a0e365974483d1d1857e5586aef6314c2ff3127b10883b0"
	I1227 19:57:07.196524   26853 cri.go:96] found id: "40622d51884d5a6d8ee55df90ebc729b6a3c523fc333355803131505ff3eb966"
	I1227 19:57:07.196527   26853 cri.go:96] found id: "0a91b3f02715c922c8d3b01e11f8224661e1ef5acc846bd67345788d0ca606bd"
	I1227 19:57:07.196536   26853 cri.go:96] found id: "1a50bd427e4c7baae7f3c47b456accb85ebbdde4eab6a955edcabd615b794f9e"
	I1227 19:57:07.196540   26853 cri.go:96] found id: "d3a037a6b817dd11c5ec44e00936772f3188aa489158b49a9834b054d01c82c7"
	I1227 19:57:07.196544   26853 cri.go:96] found id: "23bfea2446ed16492bae9feadf79f1824dcfe4a849e48f5de8c800dbd990887c"
	I1227 19:57:07.196549   26853 cri.go:96] found id: "91336251bde316e5658d75f0212a65106cb58aa18bf9b1c9f4d3c56c5f26c8ff"
	I1227 19:57:07.196553   26853 cri.go:96] found id: "ca648bdc045e5c3e33cb4b4a17e50db157a0627a3c8b78c16aa0df32b6ac6a0f"
	I1227 19:57:07.196559   26853 cri.go:96] found id: "5c6285e9513d1d5dca2d137de3e5116c2afb944c760615277ecacea48b4df496"
	I1227 19:57:07.196567   26853 cri.go:96] found id: "19ba832dc0107404daa365e890cf11d99044af20216b50eeb6911292fc34aaa0"
	I1227 19:57:07.196572   26853 cri.go:96] found id: "bb1c2025dc24f7c70349970cd631d0d6533295514fad7908a0cb32e5a75c7b16"
	I1227 19:57:07.196577   26853 cri.go:96] found id: "3321a7f88f9d12626ac15d719b4297b42e6f8b07de2f8126dcbac0a1de31b158"
	I1227 19:57:07.196580   26853 cri.go:96] found id: "1502dada111653661c0e71a273490dd919cf35583ca9b21e2bc09c337661c35e"
	I1227 19:57:07.196585   26853 cri.go:96] found id: "49d7f2a724dd38975a39bb75f10f2c0994c7851ab44689d5ebc0a9b214f17cf0"
	I1227 19:57:07.196592   26853 cri.go:96] found id: "974b47f0c91a71d1e2e051d35f13cc1107f3c144323412c0713a463400700010"
	I1227 19:57:07.196598   26853 cri.go:96] found id: "0260af8be5e3cd213383de25d2bae0781b2931284e151ca5e6c8d49dad444e87"
	I1227 19:57:07.196603   26853 cri.go:96] found id: "894722f2278ed9a7a94031fcca594d94b7f3435e0dfae78ff810a233ffb75601"
	I1227 19:57:07.196608   26853 cri.go:96] found id: "cf4602f54ae9bc8b2fbe9e14c800a848f78488296da5de81535f76b847b75c95"
	I1227 19:57:07.196614   26853 cri.go:96] found id: ""
	I1227 19:57:07.196670   26853 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:57:07.211407   26853 out.go:203] 
	W1227 19:57:07.212583   26853 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:57:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:57:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:57:07.212611   26853 out.go:285] * 
	* 
	W1227 19:57:07.213360   26853 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:57:07.214468   26853 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-416077 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.42s)

TestAddons/parallel/Ingress (11.89s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-416077 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:211: (dbg) Done: kubectl --context addons-416077 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (1.324606486s)
addons_test.go:236: (dbg) Run:  kubectl --context addons-416077 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-416077 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [977d9c50-5aa5-413a-8944-2ef6ac03b172] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [977d9c50-5aa5-413a-8944-2ef6ac03b172] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.002843028s
I1227 19:56:59.567233   14427 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-416077 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-416077 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-416077 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-416077 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-416077 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (233.317052ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 19:57:00.440721   26203 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:57:00.440904   26203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:57:00.440925   26203 out.go:374] Setting ErrFile to fd 2...
	I1227 19:57:00.440929   26203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:57:00.441100   26203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 19:57:00.441343   26203 mustload.go:66] Loading cluster: addons-416077
	I1227 19:57:00.441605   26203 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:57:00.441622   26203 addons.go:622] checking whether the cluster is paused
	I1227 19:57:00.441696   26203 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:57:00.441711   26203 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:57:00.442077   26203 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:57:00.460094   26203 ssh_runner.go:195] Run: systemctl --version
	I1227 19:57:00.460156   26203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:57:00.480513   26203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:57:00.570474   26203 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:57:00.570596   26203 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:57:00.600963   26203 cri.go:96] found id: "b8a7bd9235b6a4caf6c2c4f7c4b1baa79ec8c0c022ed5dcff15829e54b1c454a"
	I1227 19:57:00.600991   26203 cri.go:96] found id: "209dc9e84dfba91aa626a0ca2d1bd87c93031429570f17334be471bb3447e5ae"
	I1227 19:57:00.600995   26203 cri.go:96] found id: "fd00ed25c985f64803a9055462ba7c0c73bbd5fd4120bb886eae5a2175f1997d"
	I1227 19:57:00.600998   26203 cri.go:96] found id: "e3bf55084b947cd6994ef964c03369061f8969a19d8255a0896caa1ed088455c"
	I1227 19:57:00.601000   26203 cri.go:96] found id: "6869a50a73d8de32c3dc51943aaa7488db0b52f9e50210bfacfc403e4b31abc3"
	I1227 19:57:00.601006   26203 cri.go:96] found id: "b1ec3b5e74d5f6a51732db328530d3318b36f99c9aec5711665c9196b1e154d0"
	I1227 19:57:00.601009   26203 cri.go:96] found id: "d88dbb28fc23c0c83a0e365974483d1d1857e5586aef6314c2ff3127b10883b0"
	I1227 19:57:00.601011   26203 cri.go:96] found id: "40622d51884d5a6d8ee55df90ebc729b6a3c523fc333355803131505ff3eb966"
	I1227 19:57:00.601014   26203 cri.go:96] found id: "0a91b3f02715c922c8d3b01e11f8224661e1ef5acc846bd67345788d0ca606bd"
	I1227 19:57:00.601021   26203 cri.go:96] found id: "1a50bd427e4c7baae7f3c47b456accb85ebbdde4eab6a955edcabd615b794f9e"
	I1227 19:57:00.601028   26203 cri.go:96] found id: "d3a037a6b817dd11c5ec44e00936772f3188aa489158b49a9834b054d01c82c7"
	I1227 19:57:00.601031   26203 cri.go:96] found id: "23bfea2446ed16492bae9feadf79f1824dcfe4a849e48f5de8c800dbd990887c"
	I1227 19:57:00.601034   26203 cri.go:96] found id: "91336251bde316e5658d75f0212a65106cb58aa18bf9b1c9f4d3c56c5f26c8ff"
	I1227 19:57:00.601037   26203 cri.go:96] found id: "ca648bdc045e5c3e33cb4b4a17e50db157a0627a3c8b78c16aa0df32b6ac6a0f"
	I1227 19:57:00.601040   26203 cri.go:96] found id: "5c6285e9513d1d5dca2d137de3e5116c2afb944c760615277ecacea48b4df496"
	I1227 19:57:00.601050   26203 cri.go:96] found id: "19ba832dc0107404daa365e890cf11d99044af20216b50eeb6911292fc34aaa0"
	I1227 19:57:00.601054   26203 cri.go:96] found id: "bb1c2025dc24f7c70349970cd631d0d6533295514fad7908a0cb32e5a75c7b16"
	I1227 19:57:00.601058   26203 cri.go:96] found id: "3321a7f88f9d12626ac15d719b4297b42e6f8b07de2f8126dcbac0a1de31b158"
	I1227 19:57:00.601062   26203 cri.go:96] found id: "1502dada111653661c0e71a273490dd919cf35583ca9b21e2bc09c337661c35e"
	I1227 19:57:00.601065   26203 cri.go:96] found id: "49d7f2a724dd38975a39bb75f10f2c0994c7851ab44689d5ebc0a9b214f17cf0"
	I1227 19:57:00.601070   26203 cri.go:96] found id: "974b47f0c91a71d1e2e051d35f13cc1107f3c144323412c0713a463400700010"
	I1227 19:57:00.601076   26203 cri.go:96] found id: "0260af8be5e3cd213383de25d2bae0781b2931284e151ca5e6c8d49dad444e87"
	I1227 19:57:00.601078   26203 cri.go:96] found id: "894722f2278ed9a7a94031fcca594d94b7f3435e0dfae78ff810a233ffb75601"
	I1227 19:57:00.601081   26203 cri.go:96] found id: "cf4602f54ae9bc8b2fbe9e14c800a848f78488296da5de81535f76b847b75c95"
	I1227 19:57:00.601084   26203 cri.go:96] found id: ""
	I1227 19:57:00.601122   26203 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:57:00.615001   26203 out.go:203] 
	W1227 19:57:00.616381   26203 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:57:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:57:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:57:00.616402   26203 out.go:285] * 
	* 
	W1227 19:57:00.617150   26203 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:57:00.618433   26203 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-416077 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-416077 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-416077 addons disable ingress --alsologtostderr -v=1: exit status 11 (231.266682ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 19:57:00.680302   26293 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:57:00.680617   26293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:57:00.680628   26293 out.go:374] Setting ErrFile to fd 2...
	I1227 19:57:00.680632   26293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:57:00.680837   26293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 19:57:00.681127   26293 mustload.go:66] Loading cluster: addons-416077
	I1227 19:57:00.681426   26293 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:57:00.681444   26293 addons.go:622] checking whether the cluster is paused
	I1227 19:57:00.681527   26293 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:57:00.681538   26293 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:57:00.681880   26293 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:57:00.699106   26293 ssh_runner.go:195] Run: systemctl --version
	I1227 19:57:00.699151   26293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:57:00.718260   26293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:57:00.807146   26293 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:57:00.807235   26293 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:57:00.834250   26293 cri.go:96] found id: "b8a7bd9235b6a4caf6c2c4f7c4b1baa79ec8c0c022ed5dcff15829e54b1c454a"
	I1227 19:57:00.834271   26293 cri.go:96] found id: "209dc9e84dfba91aa626a0ca2d1bd87c93031429570f17334be471bb3447e5ae"
	I1227 19:57:00.834276   26293 cri.go:96] found id: "fd00ed25c985f64803a9055462ba7c0c73bbd5fd4120bb886eae5a2175f1997d"
	I1227 19:57:00.834281   26293 cri.go:96] found id: "e3bf55084b947cd6994ef964c03369061f8969a19d8255a0896caa1ed088455c"
	I1227 19:57:00.834286   26293 cri.go:96] found id: "6869a50a73d8de32c3dc51943aaa7488db0b52f9e50210bfacfc403e4b31abc3"
	I1227 19:57:00.834291   26293 cri.go:96] found id: "b1ec3b5e74d5f6a51732db328530d3318b36f99c9aec5711665c9196b1e154d0"
	I1227 19:57:00.834296   26293 cri.go:96] found id: "d88dbb28fc23c0c83a0e365974483d1d1857e5586aef6314c2ff3127b10883b0"
	I1227 19:57:00.834300   26293 cri.go:96] found id: "40622d51884d5a6d8ee55df90ebc729b6a3c523fc333355803131505ff3eb966"
	I1227 19:57:00.834305   26293 cri.go:96] found id: "0a91b3f02715c922c8d3b01e11f8224661e1ef5acc846bd67345788d0ca606bd"
	I1227 19:57:00.834311   26293 cri.go:96] found id: "1a50bd427e4c7baae7f3c47b456accb85ebbdde4eab6a955edcabd615b794f9e"
	I1227 19:57:00.834315   26293 cri.go:96] found id: "d3a037a6b817dd11c5ec44e00936772f3188aa489158b49a9834b054d01c82c7"
	I1227 19:57:00.834319   26293 cri.go:96] found id: "23bfea2446ed16492bae9feadf79f1824dcfe4a849e48f5de8c800dbd990887c"
	I1227 19:57:00.834324   26293 cri.go:96] found id: "91336251bde316e5658d75f0212a65106cb58aa18bf9b1c9f4d3c56c5f26c8ff"
	I1227 19:57:00.834333   26293 cri.go:96] found id: "ca648bdc045e5c3e33cb4b4a17e50db157a0627a3c8b78c16aa0df32b6ac6a0f"
	I1227 19:57:00.834337   26293 cri.go:96] found id: "5c6285e9513d1d5dca2d137de3e5116c2afb944c760615277ecacea48b4df496"
	I1227 19:57:00.834345   26293 cri.go:96] found id: "19ba832dc0107404daa365e890cf11d99044af20216b50eeb6911292fc34aaa0"
	I1227 19:57:00.834349   26293 cri.go:96] found id: "bb1c2025dc24f7c70349970cd631d0d6533295514fad7908a0cb32e5a75c7b16"
	I1227 19:57:00.834352   26293 cri.go:96] found id: "3321a7f88f9d12626ac15d719b4297b42e6f8b07de2f8126dcbac0a1de31b158"
	I1227 19:57:00.834356   26293 cri.go:96] found id: "1502dada111653661c0e71a273490dd919cf35583ca9b21e2bc09c337661c35e"
	I1227 19:57:00.834360   26293 cri.go:96] found id: "49d7f2a724dd38975a39bb75f10f2c0994c7851ab44689d5ebc0a9b214f17cf0"
	I1227 19:57:00.834365   26293 cri.go:96] found id: "974b47f0c91a71d1e2e051d35f13cc1107f3c144323412c0713a463400700010"
	I1227 19:57:00.834374   26293 cri.go:96] found id: "0260af8be5e3cd213383de25d2bae0781b2931284e151ca5e6c8d49dad444e87"
	I1227 19:57:00.834385   26293 cri.go:96] found id: "894722f2278ed9a7a94031fcca594d94b7f3435e0dfae78ff810a233ffb75601"
	I1227 19:57:00.834393   26293 cri.go:96] found id: "cf4602f54ae9bc8b2fbe9e14c800a848f78488296da5de81535f76b847b75c95"
	I1227 19:57:00.834398   26293 cri.go:96] found id: ""
	I1227 19:57:00.834447   26293 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:57:00.847342   26293 out.go:203] 
	W1227 19:57:00.848465   26293 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:57:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:57:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:57:00.848483   26293 out.go:285] * 
	* 
	W1227 19:57:00.849143   26293 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:57:00.850285   26293 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-416077 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (11.89s)

TestAddons/parallel/InspektorGadget (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-xz7sz" [639cb2f6-4c44-43c5-92f4-919a4b72ad26] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003977042s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-416077 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-416077 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (248.508978ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 19:57:06.850995   26755 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:57:06.851298   26755 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:57:06.851308   26755 out.go:374] Setting ErrFile to fd 2...
	I1227 19:57:06.851312   26755 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:57:06.851503   26755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 19:57:06.851761   26755 mustload.go:66] Loading cluster: addons-416077
	I1227 19:57:06.852065   26755 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:57:06.852083   26755 addons.go:622] checking whether the cluster is paused
	I1227 19:57:06.852162   26755 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:57:06.852173   26755 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:57:06.852542   26755 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:57:06.872510   26755 ssh_runner.go:195] Run: systemctl --version
	I1227 19:57:06.872578   26755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:57:06.893671   26755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:57:06.987290   26755 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:57:06.987382   26755 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:57:07.019477   26755 cri.go:96] found id: "b8a7bd9235b6a4caf6c2c4f7c4b1baa79ec8c0c022ed5dcff15829e54b1c454a"
	I1227 19:57:07.019508   26755 cri.go:96] found id: "209dc9e84dfba91aa626a0ca2d1bd87c93031429570f17334be471bb3447e5ae"
	I1227 19:57:07.019513   26755 cri.go:96] found id: "fd00ed25c985f64803a9055462ba7c0c73bbd5fd4120bb886eae5a2175f1997d"
	I1227 19:57:07.019516   26755 cri.go:96] found id: "e3bf55084b947cd6994ef964c03369061f8969a19d8255a0896caa1ed088455c"
	I1227 19:57:07.019519   26755 cri.go:96] found id: "6869a50a73d8de32c3dc51943aaa7488db0b52f9e50210bfacfc403e4b31abc3"
	I1227 19:57:07.019522   26755 cri.go:96] found id: "b1ec3b5e74d5f6a51732db328530d3318b36f99c9aec5711665c9196b1e154d0"
	I1227 19:57:07.019525   26755 cri.go:96] found id: "d88dbb28fc23c0c83a0e365974483d1d1857e5586aef6314c2ff3127b10883b0"
	I1227 19:57:07.019527   26755 cri.go:96] found id: "40622d51884d5a6d8ee55df90ebc729b6a3c523fc333355803131505ff3eb966"
	I1227 19:57:07.019530   26755 cri.go:96] found id: "0a91b3f02715c922c8d3b01e11f8224661e1ef5acc846bd67345788d0ca606bd"
	I1227 19:57:07.019537   26755 cri.go:96] found id: "1a50bd427e4c7baae7f3c47b456accb85ebbdde4eab6a955edcabd615b794f9e"
	I1227 19:57:07.019539   26755 cri.go:96] found id: "d3a037a6b817dd11c5ec44e00936772f3188aa489158b49a9834b054d01c82c7"
	I1227 19:57:07.019542   26755 cri.go:96] found id: "23bfea2446ed16492bae9feadf79f1824dcfe4a849e48f5de8c800dbd990887c"
	I1227 19:57:07.019544   26755 cri.go:96] found id: "91336251bde316e5658d75f0212a65106cb58aa18bf9b1c9f4d3c56c5f26c8ff"
	I1227 19:57:07.019547   26755 cri.go:96] found id: "ca648bdc045e5c3e33cb4b4a17e50db157a0627a3c8b78c16aa0df32b6ac6a0f"
	I1227 19:57:07.019550   26755 cri.go:96] found id: "5c6285e9513d1d5dca2d137de3e5116c2afb944c760615277ecacea48b4df496"
	I1227 19:57:07.019556   26755 cri.go:96] found id: "19ba832dc0107404daa365e890cf11d99044af20216b50eeb6911292fc34aaa0"
	I1227 19:57:07.019559   26755 cri.go:96] found id: "bb1c2025dc24f7c70349970cd631d0d6533295514fad7908a0cb32e5a75c7b16"
	I1227 19:57:07.019564   26755 cri.go:96] found id: "3321a7f88f9d12626ac15d719b4297b42e6f8b07de2f8126dcbac0a1de31b158"
	I1227 19:57:07.019566   26755 cri.go:96] found id: "1502dada111653661c0e71a273490dd919cf35583ca9b21e2bc09c337661c35e"
	I1227 19:57:07.019569   26755 cri.go:96] found id: "49d7f2a724dd38975a39bb75f10f2c0994c7851ab44689d5ebc0a9b214f17cf0"
	I1227 19:57:07.019572   26755 cri.go:96] found id: "974b47f0c91a71d1e2e051d35f13cc1107f3c144323412c0713a463400700010"
	I1227 19:57:07.019575   26755 cri.go:96] found id: "0260af8be5e3cd213383de25d2bae0781b2931284e151ca5e6c8d49dad444e87"
	I1227 19:57:07.019578   26755 cri.go:96] found id: "894722f2278ed9a7a94031fcca594d94b7f3435e0dfae78ff810a233ffb75601"
	I1227 19:57:07.019580   26755 cri.go:96] found id: "cf4602f54ae9bc8b2fbe9e14c800a848f78488296da5de81535f76b847b75c95"
	I1227 19:57:07.019583   26755 cri.go:96] found id: ""
	I1227 19:57:07.019619   26755 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:57:07.033998   26755 out.go:203] 
	W1227 19:57:07.035053   26755 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:57:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:57:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:57:07.035072   26755 out.go:285] * 
	* 
	W1227 19:57:07.036045   26755 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:57:07.037236   26755 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-416077 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.25s)

TestAddons/parallel/MetricsServer (5.29s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 3.698078ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-m9rtg" [3d8a12cc-9912-493e-b158-f6a7a1c5a8bb] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003601803s
addons_test.go:465: (dbg) Run:  kubectl --context addons-416077 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-416077 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-416077 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (224.919437ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 19:56:54.080124   25379 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:56:54.080398   25379 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:56:54.080407   25379 out.go:374] Setting ErrFile to fd 2...
	I1227 19:56:54.080411   25379 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:56:54.080591   25379 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 19:56:54.080834   25379 mustload.go:66] Loading cluster: addons-416077
	I1227 19:56:54.081145   25379 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:56:54.081165   25379 addons.go:622] checking whether the cluster is paused
	I1227 19:56:54.081253   25379 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:56:54.081265   25379 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:56:54.081602   25379 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:56:54.099228   25379 ssh_runner.go:195] Run: systemctl --version
	I1227 19:56:54.099282   25379 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:56:54.117801   25379 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:56:54.206290   25379 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:56:54.206354   25379 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:56:54.233404   25379 cri.go:96] found id: "b8a7bd9235b6a4caf6c2c4f7c4b1baa79ec8c0c022ed5dcff15829e54b1c454a"
	I1227 19:56:54.233422   25379 cri.go:96] found id: "209dc9e84dfba91aa626a0ca2d1bd87c93031429570f17334be471bb3447e5ae"
	I1227 19:56:54.233426   25379 cri.go:96] found id: "fd00ed25c985f64803a9055462ba7c0c73bbd5fd4120bb886eae5a2175f1997d"
	I1227 19:56:54.233429   25379 cri.go:96] found id: "e3bf55084b947cd6994ef964c03369061f8969a19d8255a0896caa1ed088455c"
	I1227 19:56:54.233432   25379 cri.go:96] found id: "6869a50a73d8de32c3dc51943aaa7488db0b52f9e50210bfacfc403e4b31abc3"
	I1227 19:56:54.233434   25379 cri.go:96] found id: "b1ec3b5e74d5f6a51732db328530d3318b36f99c9aec5711665c9196b1e154d0"
	I1227 19:56:54.233437   25379 cri.go:96] found id: "d88dbb28fc23c0c83a0e365974483d1d1857e5586aef6314c2ff3127b10883b0"
	I1227 19:56:54.233440   25379 cri.go:96] found id: "40622d51884d5a6d8ee55df90ebc729b6a3c523fc333355803131505ff3eb966"
	I1227 19:56:54.233442   25379 cri.go:96] found id: "0a91b3f02715c922c8d3b01e11f8224661e1ef5acc846bd67345788d0ca606bd"
	I1227 19:56:54.233447   25379 cri.go:96] found id: "1a50bd427e4c7baae7f3c47b456accb85ebbdde4eab6a955edcabd615b794f9e"
	I1227 19:56:54.233450   25379 cri.go:96] found id: "d3a037a6b817dd11c5ec44e00936772f3188aa489158b49a9834b054d01c82c7"
	I1227 19:56:54.233454   25379 cri.go:96] found id: "23bfea2446ed16492bae9feadf79f1824dcfe4a849e48f5de8c800dbd990887c"
	I1227 19:56:54.233460   25379 cri.go:96] found id: "91336251bde316e5658d75f0212a65106cb58aa18bf9b1c9f4d3c56c5f26c8ff"
	I1227 19:56:54.233465   25379 cri.go:96] found id: "ca648bdc045e5c3e33cb4b4a17e50db157a0627a3c8b78c16aa0df32b6ac6a0f"
	I1227 19:56:54.233469   25379 cri.go:96] found id: "5c6285e9513d1d5dca2d137de3e5116c2afb944c760615277ecacea48b4df496"
	I1227 19:56:54.233480   25379 cri.go:96] found id: "19ba832dc0107404daa365e890cf11d99044af20216b50eeb6911292fc34aaa0"
	I1227 19:56:54.233485   25379 cri.go:96] found id: "bb1c2025dc24f7c70349970cd631d0d6533295514fad7908a0cb32e5a75c7b16"
	I1227 19:56:54.233491   25379 cri.go:96] found id: "3321a7f88f9d12626ac15d719b4297b42e6f8b07de2f8126dcbac0a1de31b158"
	I1227 19:56:54.233495   25379 cri.go:96] found id: "1502dada111653661c0e71a273490dd919cf35583ca9b21e2bc09c337661c35e"
	I1227 19:56:54.233500   25379 cri.go:96] found id: "49d7f2a724dd38975a39bb75f10f2c0994c7851ab44689d5ebc0a9b214f17cf0"
	I1227 19:56:54.233504   25379 cri.go:96] found id: "974b47f0c91a71d1e2e051d35f13cc1107f3c144323412c0713a463400700010"
	I1227 19:56:54.233507   25379 cri.go:96] found id: "0260af8be5e3cd213383de25d2bae0781b2931284e151ca5e6c8d49dad444e87"
	I1227 19:56:54.233510   25379 cri.go:96] found id: "894722f2278ed9a7a94031fcca594d94b7f3435e0dfae78ff810a233ffb75601"
	I1227 19:56:54.233513   25379 cri.go:96] found id: "cf4602f54ae9bc8b2fbe9e14c800a848f78488296da5de81535f76b847b75c95"
	I1227 19:56:54.233517   25379 cri.go:96] found id: ""
	I1227 19:56:54.233554   25379 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:56:54.247131   25379 out.go:203] 
	W1227 19:56:54.248387   25379 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:56:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:56:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:56:54.248406   25379 out.go:285] * 
	* 
	W1227 19:56:54.249071   25379 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:56:54.250203   25379 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-416077 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.29s)

TestAddons/parallel/CSI (29.88s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1227 19:56:51.375550   14427 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1227 19:56:51.378716   14427 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1227 19:56:51.378739   14427 kapi.go:107] duration metric: took 3.205894ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 3.216334ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-416077 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-416077 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-416077 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-416077 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-416077 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-416077 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-416077 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-416077 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-416077 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [8229025d-decb-41f6-b0db-1086e903c7f6] Pending
helpers_test.go:353: "task-pv-pod" [8229025d-decb-41f6-b0db-1086e903c7f6] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.002393333s
addons_test.go:574: (dbg) Run:  kubectl --context addons-416077 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-416077 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-416077 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-416077 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-416077 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-416077 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-416077 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-416077 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-416077 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-416077 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-416077 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-416077 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-416077 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-416077 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-416077 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [97c5ef0d-c62d-4305-8107-d549af0d80a0] Pending
helpers_test.go:353: "task-pv-pod-restore" [97c5ef0d-c62d-4305-8107-d549af0d80a0] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.003220382s
addons_test.go:616: (dbg) Run:  kubectl --context addons-416077 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-416077 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-416077 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-416077 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-416077 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (225.759747ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 19:57:20.852683   27820 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:57:20.852805   27820 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:57:20.852813   27820 out.go:374] Setting ErrFile to fd 2...
	I1227 19:57:20.852817   27820 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:57:20.853014   27820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 19:57:20.853247   27820 mustload.go:66] Loading cluster: addons-416077
	I1227 19:57:20.853569   27820 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:57:20.853587   27820 addons.go:622] checking whether the cluster is paused
	I1227 19:57:20.853690   27820 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:57:20.853704   27820 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:57:20.854061   27820 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:57:20.871172   27820 ssh_runner.go:195] Run: systemctl --version
	I1227 19:57:20.871218   27820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:57:20.887779   27820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:57:20.975905   27820 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:57:20.975990   27820 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:57:21.003965   27820 cri.go:96] found id: "b8a7bd9235b6a4caf6c2c4f7c4b1baa79ec8c0c022ed5dcff15829e54b1c454a"
	I1227 19:57:21.003992   27820 cri.go:96] found id: "209dc9e84dfba91aa626a0ca2d1bd87c93031429570f17334be471bb3447e5ae"
	I1227 19:57:21.003996   27820 cri.go:96] found id: "fd00ed25c985f64803a9055462ba7c0c73bbd5fd4120bb886eae5a2175f1997d"
	I1227 19:57:21.003999   27820 cri.go:96] found id: "e3bf55084b947cd6994ef964c03369061f8969a19d8255a0896caa1ed088455c"
	I1227 19:57:21.004002   27820 cri.go:96] found id: "6869a50a73d8de32c3dc51943aaa7488db0b52f9e50210bfacfc403e4b31abc3"
	I1227 19:57:21.004012   27820 cri.go:96] found id: "b1ec3b5e74d5f6a51732db328530d3318b36f99c9aec5711665c9196b1e154d0"
	I1227 19:57:21.004015   27820 cri.go:96] found id: "d88dbb28fc23c0c83a0e365974483d1d1857e5586aef6314c2ff3127b10883b0"
	I1227 19:57:21.004018   27820 cri.go:96] found id: "40622d51884d5a6d8ee55df90ebc729b6a3c523fc333355803131505ff3eb966"
	I1227 19:57:21.004021   27820 cri.go:96] found id: "0a91b3f02715c922c8d3b01e11f8224661e1ef5acc846bd67345788d0ca606bd"
	I1227 19:57:21.004029   27820 cri.go:96] found id: "1a50bd427e4c7baae7f3c47b456accb85ebbdde4eab6a955edcabd615b794f9e"
	I1227 19:57:21.004035   27820 cri.go:96] found id: "d3a037a6b817dd11c5ec44e00936772f3188aa489158b49a9834b054d01c82c7"
	I1227 19:57:21.004038   27820 cri.go:96] found id: "23bfea2446ed16492bae9feadf79f1824dcfe4a849e48f5de8c800dbd990887c"
	I1227 19:57:21.004041   27820 cri.go:96] found id: "91336251bde316e5658d75f0212a65106cb58aa18bf9b1c9f4d3c56c5f26c8ff"
	I1227 19:57:21.004046   27820 cri.go:96] found id: "ca648bdc045e5c3e33cb4b4a17e50db157a0627a3c8b78c16aa0df32b6ac6a0f"
	I1227 19:57:21.004049   27820 cri.go:96] found id: "5c6285e9513d1d5dca2d137de3e5116c2afb944c760615277ecacea48b4df496"
	I1227 19:57:21.004063   27820 cri.go:96] found id: "19ba832dc0107404daa365e890cf11d99044af20216b50eeb6911292fc34aaa0"
	I1227 19:57:21.004067   27820 cri.go:96] found id: "bb1c2025dc24f7c70349970cd631d0d6533295514fad7908a0cb32e5a75c7b16"
	I1227 19:57:21.004072   27820 cri.go:96] found id: "3321a7f88f9d12626ac15d719b4297b42e6f8b07de2f8126dcbac0a1de31b158"
	I1227 19:57:21.004075   27820 cri.go:96] found id: "1502dada111653661c0e71a273490dd919cf35583ca9b21e2bc09c337661c35e"
	I1227 19:57:21.004077   27820 cri.go:96] found id: "49d7f2a724dd38975a39bb75f10f2c0994c7851ab44689d5ebc0a9b214f17cf0"
	I1227 19:57:21.004083   27820 cri.go:96] found id: "974b47f0c91a71d1e2e051d35f13cc1107f3c144323412c0713a463400700010"
	I1227 19:57:21.004088   27820 cri.go:96] found id: "0260af8be5e3cd213383de25d2bae0781b2931284e151ca5e6c8d49dad444e87"
	I1227 19:57:21.004091   27820 cri.go:96] found id: "894722f2278ed9a7a94031fcca594d94b7f3435e0dfae78ff810a233ffb75601"
	I1227 19:57:21.004094   27820 cri.go:96] found id: "cf4602f54ae9bc8b2fbe9e14c800a848f78488296da5de81535f76b847b75c95"
	I1227 19:57:21.004097   27820 cri.go:96] found id: ""
	I1227 19:57:21.004142   27820 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:57:21.017825   27820 out.go:203] 
	W1227 19:57:21.019298   27820 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:57:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:57:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:57:21.019320   27820 out.go:285] * 
	* 
	W1227 19:57:21.020057   27820 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:57:21.021109   27820 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-416077 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-416077 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-416077 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (225.767129ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 19:57:21.076654   27882 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:57:21.076929   27882 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:57:21.076938   27882 out.go:374] Setting ErrFile to fd 2...
	I1227 19:57:21.076943   27882 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:57:21.077113   27882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 19:57:21.077363   27882 mustload.go:66] Loading cluster: addons-416077
	I1227 19:57:21.077655   27882 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:57:21.077672   27882 addons.go:622] checking whether the cluster is paused
	I1227 19:57:21.077750   27882 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:57:21.077767   27882 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:57:21.078125   27882 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:57:21.095281   27882 ssh_runner.go:195] Run: systemctl --version
	I1227 19:57:21.095331   27882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:57:21.112012   27882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:57:21.200683   27882 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:57:21.200780   27882 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:57:21.230503   27882 cri.go:96] found id: "b8a7bd9235b6a4caf6c2c4f7c4b1baa79ec8c0c022ed5dcff15829e54b1c454a"
	I1227 19:57:21.230526   27882 cri.go:96] found id: "209dc9e84dfba91aa626a0ca2d1bd87c93031429570f17334be471bb3447e5ae"
	I1227 19:57:21.230533   27882 cri.go:96] found id: "fd00ed25c985f64803a9055462ba7c0c73bbd5fd4120bb886eae5a2175f1997d"
	I1227 19:57:21.230539   27882 cri.go:96] found id: "e3bf55084b947cd6994ef964c03369061f8969a19d8255a0896caa1ed088455c"
	I1227 19:57:21.230544   27882 cri.go:96] found id: "6869a50a73d8de32c3dc51943aaa7488db0b52f9e50210bfacfc403e4b31abc3"
	I1227 19:57:21.230551   27882 cri.go:96] found id: "b1ec3b5e74d5f6a51732db328530d3318b36f99c9aec5711665c9196b1e154d0"
	I1227 19:57:21.230556   27882 cri.go:96] found id: "d88dbb28fc23c0c83a0e365974483d1d1857e5586aef6314c2ff3127b10883b0"
	I1227 19:57:21.230561   27882 cri.go:96] found id: "40622d51884d5a6d8ee55df90ebc729b6a3c523fc333355803131505ff3eb966"
	I1227 19:57:21.230565   27882 cri.go:96] found id: "0a91b3f02715c922c8d3b01e11f8224661e1ef5acc846bd67345788d0ca606bd"
	I1227 19:57:21.230581   27882 cri.go:96] found id: "1a50bd427e4c7baae7f3c47b456accb85ebbdde4eab6a955edcabd615b794f9e"
	I1227 19:57:21.230589   27882 cri.go:96] found id: "d3a037a6b817dd11c5ec44e00936772f3188aa489158b49a9834b054d01c82c7"
	I1227 19:57:21.230593   27882 cri.go:96] found id: "23bfea2446ed16492bae9feadf79f1824dcfe4a849e48f5de8c800dbd990887c"
	I1227 19:57:21.230595   27882 cri.go:96] found id: "91336251bde316e5658d75f0212a65106cb58aa18bf9b1c9f4d3c56c5f26c8ff"
	I1227 19:57:21.230598   27882 cri.go:96] found id: "ca648bdc045e5c3e33cb4b4a17e50db157a0627a3c8b78c16aa0df32b6ac6a0f"
	I1227 19:57:21.230601   27882 cri.go:96] found id: "5c6285e9513d1d5dca2d137de3e5116c2afb944c760615277ecacea48b4df496"
	I1227 19:57:21.230605   27882 cri.go:96] found id: "19ba832dc0107404daa365e890cf11d99044af20216b50eeb6911292fc34aaa0"
	I1227 19:57:21.230608   27882 cri.go:96] found id: "bb1c2025dc24f7c70349970cd631d0d6533295514fad7908a0cb32e5a75c7b16"
	I1227 19:57:21.230612   27882 cri.go:96] found id: "3321a7f88f9d12626ac15d719b4297b42e6f8b07de2f8126dcbac0a1de31b158"
	I1227 19:57:21.230615   27882 cri.go:96] found id: "1502dada111653661c0e71a273490dd919cf35583ca9b21e2bc09c337661c35e"
	I1227 19:57:21.230618   27882 cri.go:96] found id: "49d7f2a724dd38975a39bb75f10f2c0994c7851ab44689d5ebc0a9b214f17cf0"
	I1227 19:57:21.230621   27882 cri.go:96] found id: "974b47f0c91a71d1e2e051d35f13cc1107f3c144323412c0713a463400700010"
	I1227 19:57:21.230624   27882 cri.go:96] found id: "0260af8be5e3cd213383de25d2bae0781b2931284e151ca5e6c8d49dad444e87"
	I1227 19:57:21.230626   27882 cri.go:96] found id: "894722f2278ed9a7a94031fcca594d94b7f3435e0dfae78ff810a233ffb75601"
	I1227 19:57:21.230629   27882 cri.go:96] found id: "cf4602f54ae9bc8b2fbe9e14c800a848f78488296da5de81535f76b847b75c95"
	I1227 19:57:21.230632   27882 cri.go:96] found id: ""
	I1227 19:57:21.230670   27882 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:57:21.243981   27882 out.go:203] 
	W1227 19:57:21.245097   27882 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:57:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:57:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:57:21.245111   27882 out.go:285] * 
	* 
	W1227 19:57:21.245798   27882 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:57:21.247016   27882 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-416077 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (29.88s)

TestAddons/parallel/Headlamp (2.41s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-416077 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-416077 --alsologtostderr -v=1: exit status 11 (230.745947ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 19:56:49.022513   23984 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:56:49.022811   23984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:56:49.022822   23984 out.go:374] Setting ErrFile to fd 2...
	I1227 19:56:49.022827   23984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:56:49.023038   23984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 19:56:49.023293   23984 mustload.go:66] Loading cluster: addons-416077
	I1227 19:56:49.023633   23984 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:56:49.023660   23984 addons.go:622] checking whether the cluster is paused
	I1227 19:56:49.023791   23984 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:56:49.023810   23984 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:56:49.024385   23984 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:56:49.041207   23984 ssh_runner.go:195] Run: systemctl --version
	I1227 19:56:49.041249   23984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:56:49.057219   23984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:56:49.145099   23984 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:56:49.145181   23984 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:56:49.172612   23984 cri.go:96] found id: "b8a7bd9235b6a4caf6c2c4f7c4b1baa79ec8c0c022ed5dcff15829e54b1c454a"
	I1227 19:56:49.172643   23984 cri.go:96] found id: "209dc9e84dfba91aa626a0ca2d1bd87c93031429570f17334be471bb3447e5ae"
	I1227 19:56:49.172650   23984 cri.go:96] found id: "fd00ed25c985f64803a9055462ba7c0c73bbd5fd4120bb886eae5a2175f1997d"
	I1227 19:56:49.172655   23984 cri.go:96] found id: "e3bf55084b947cd6994ef964c03369061f8969a19d8255a0896caa1ed088455c"
	I1227 19:56:49.172658   23984 cri.go:96] found id: "6869a50a73d8de32c3dc51943aaa7488db0b52f9e50210bfacfc403e4b31abc3"
	I1227 19:56:49.172662   23984 cri.go:96] found id: "b1ec3b5e74d5f6a51732db328530d3318b36f99c9aec5711665c9196b1e154d0"
	I1227 19:56:49.172665   23984 cri.go:96] found id: "d88dbb28fc23c0c83a0e365974483d1d1857e5586aef6314c2ff3127b10883b0"
	I1227 19:56:49.172668   23984 cri.go:96] found id: "40622d51884d5a6d8ee55df90ebc729b6a3c523fc333355803131505ff3eb966"
	I1227 19:56:49.172671   23984 cri.go:96] found id: "0a91b3f02715c922c8d3b01e11f8224661e1ef5acc846bd67345788d0ca606bd"
	I1227 19:56:49.172682   23984 cri.go:96] found id: "1a50bd427e4c7baae7f3c47b456accb85ebbdde4eab6a955edcabd615b794f9e"
	I1227 19:56:49.172688   23984 cri.go:96] found id: "d3a037a6b817dd11c5ec44e00936772f3188aa489158b49a9834b054d01c82c7"
	I1227 19:56:49.172691   23984 cri.go:96] found id: "23bfea2446ed16492bae9feadf79f1824dcfe4a849e48f5de8c800dbd990887c"
	I1227 19:56:49.172694   23984 cri.go:96] found id: "91336251bde316e5658d75f0212a65106cb58aa18bf9b1c9f4d3c56c5f26c8ff"
	I1227 19:56:49.172697   23984 cri.go:96] found id: "ca648bdc045e5c3e33cb4b4a17e50db157a0627a3c8b78c16aa0df32b6ac6a0f"
	I1227 19:56:49.172700   23984 cri.go:96] found id: "5c6285e9513d1d5dca2d137de3e5116c2afb944c760615277ecacea48b4df496"
	I1227 19:56:49.172710   23984 cri.go:96] found id: "19ba832dc0107404daa365e890cf11d99044af20216b50eeb6911292fc34aaa0"
	I1227 19:56:49.172713   23984 cri.go:96] found id: "bb1c2025dc24f7c70349970cd631d0d6533295514fad7908a0cb32e5a75c7b16"
	I1227 19:56:49.172717   23984 cri.go:96] found id: "3321a7f88f9d12626ac15d719b4297b42e6f8b07de2f8126dcbac0a1de31b158"
	I1227 19:56:49.172720   23984 cri.go:96] found id: "1502dada111653661c0e71a273490dd919cf35583ca9b21e2bc09c337661c35e"
	I1227 19:56:49.172722   23984 cri.go:96] found id: "49d7f2a724dd38975a39bb75f10f2c0994c7851ab44689d5ebc0a9b214f17cf0"
	I1227 19:56:49.172725   23984 cri.go:96] found id: "974b47f0c91a71d1e2e051d35f13cc1107f3c144323412c0713a463400700010"
	I1227 19:56:49.172727   23984 cri.go:96] found id: "0260af8be5e3cd213383de25d2bae0781b2931284e151ca5e6c8d49dad444e87"
	I1227 19:56:49.172730   23984 cri.go:96] found id: "894722f2278ed9a7a94031fcca594d94b7f3435e0dfae78ff810a233ffb75601"
	I1227 19:56:49.172733   23984 cri.go:96] found id: "cf4602f54ae9bc8b2fbe9e14c800a848f78488296da5de81535f76b847b75c95"
	I1227 19:56:49.172736   23984 cri.go:96] found id: ""
	I1227 19:56:49.172788   23984 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:56:49.186431   23984 out.go:203] 
	W1227 19:56:49.187716   23984 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:56:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:56:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:56:49.187737   23984 out.go:285] * 
	* 
	W1227 19:56:49.188381   23984 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:56:49.189778   23984 out.go:203] 

** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-416077 --alsologtostderr -v=1": exit status 11
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-416077
helpers_test.go:244: (dbg) docker inspect addons-416077:

-- stdout --
	[
	    {
	        "Id": "5472080ebb3df7630c2a0e23b52fdb07701a9c956f1bd39b4d6a03abc45f7992",
	        "Created": "2025-12-27T19:55:31.14976836Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 16420,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T19:55:31.181046746Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/5472080ebb3df7630c2a0e23b52fdb07701a9c956f1bd39b4d6a03abc45f7992/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5472080ebb3df7630c2a0e23b52fdb07701a9c956f1bd39b4d6a03abc45f7992/hostname",
	        "HostsPath": "/var/lib/docker/containers/5472080ebb3df7630c2a0e23b52fdb07701a9c956f1bd39b4d6a03abc45f7992/hosts",
	        "LogPath": "/var/lib/docker/containers/5472080ebb3df7630c2a0e23b52fdb07701a9c956f1bd39b4d6a03abc45f7992/5472080ebb3df7630c2a0e23b52fdb07701a9c956f1bd39b4d6a03abc45f7992-json.log",
	        "Name": "/addons-416077",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-416077:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-416077",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5472080ebb3df7630c2a0e23b52fdb07701a9c956f1bd39b4d6a03abc45f7992",
	                "LowerDir": "/var/lib/docker/overlay2/d827799f98381d526d61e9fdec170094222525a60c48b1d6d2a649ac0a9b2550-init/diff:/var/lib/docker/overlay2/37a0694d5e09176fe02add5b16d603e44d65e3a985a3465775c60819ea782502/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d827799f98381d526d61e9fdec170094222525a60c48b1d6d2a649ac0a9b2550/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d827799f98381d526d61e9fdec170094222525a60c48b1d6d2a649ac0a9b2550/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d827799f98381d526d61e9fdec170094222525a60c48b1d6d2a649ac0a9b2550/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-416077",
	                "Source": "/var/lib/docker/volumes/addons-416077/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-416077",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-416077",
	                "name.minikube.sigs.k8s.io": "addons-416077",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "cfcf5049e188ffa6d3a4e0bdb128eaf58f084a5ce8c518a3fa4efa28c340a244",
	            "SandboxKey": "/var/run/docker/netns/cfcf5049e188",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-416077": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5b385a256b6f9fc6d085bb8b995dfd1fc7cae5fa383f338c31a5e57affeb2bdc",
	                    "EndpointID": "27ecb885ce77e6b720f4051a7b993edde616378356a1f5112e68be5f6787cec7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "96:2d:c5:f5:bb:44",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-416077",
	                        "5472080ebb3d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-416077 -n addons-416077
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-416077 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-416077 logs -n 25: (1.077345976s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-888117 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-888117   │ jenkins │ v1.37.0 │ 27 Dec 25 19:54 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │ 27 Dec 25 19:55 UTC │
	│ delete  │ -p download-only-888117                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-888117   │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │ 27 Dec 25 19:55 UTC │
	│ start   │ -o=json --download-only -p download-only-695376 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-695376   │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │ 27 Dec 25 19:55 UTC │
	│ delete  │ -p download-only-695376                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-695376   │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │ 27 Dec 25 19:55 UTC │
	│ delete  │ -p download-only-888117                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-888117   │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │ 27 Dec 25 19:55 UTC │
	│ delete  │ -p download-only-695376                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-695376   │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │ 27 Dec 25 19:55 UTC │
	│ start   │ --download-only -p download-docker-016221 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-016221 │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │                     │
	│ delete  │ -p download-docker-016221                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-016221 │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │ 27 Dec 25 19:55 UTC │
	│ start   │ --download-only -p binary-mirror-353868 --alsologtostderr --binary-mirror http://127.0.0.1:46245 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-353868   │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │                     │
	│ delete  │ -p binary-mirror-353868                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-353868   │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │ 27 Dec 25 19:55 UTC │
	│ addons  │ disable dashboard -p addons-416077                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-416077          │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │                     │
	│ addons  │ enable dashboard -p addons-416077                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-416077          │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │                     │
	│ start   │ -p addons-416077 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-416077          │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │ 27 Dec 25 19:56 UTC │
	│ addons  │ addons-416077 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-416077          │ jenkins │ v1.37.0 │ 27 Dec 25 19:56 UTC │                     │
	│ addons  │ addons-416077 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-416077          │ jenkins │ v1.37.0 │ 27 Dec 25 19:56 UTC │                     │
	│ addons  │ enable headlamp -p addons-416077 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-416077          │ jenkins │ v1.37.0 │ 27 Dec 25 19:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 19:55:08
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 19:55:08.800223   15770 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:55:08.800868   15770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:55:08.800879   15770 out.go:374] Setting ErrFile to fd 2...
	I1227 19:55:08.800883   15770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:55:08.801081   15770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 19:55:08.801572   15770 out.go:368] Setting JSON to false
	I1227 19:55:08.802319   15770 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2258,"bootTime":1766863051,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 19:55:08.802371   15770 start.go:143] virtualization: kvm guest
	I1227 19:55:08.803961   15770 out.go:179] * [addons-416077] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 19:55:08.804940   15770 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 19:55:08.804953   15770 notify.go:221] Checking for updates...
	I1227 19:55:08.806787   15770 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 19:55:08.807855   15770 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 19:55:08.808800   15770 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube
	I1227 19:55:08.809701   15770 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 19:55:08.810697   15770 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 19:55:08.811852   15770 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 19:55:08.833200   15770 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 19:55:08.833347   15770 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 19:55:08.888587   15770 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-27 19:55:08.879115382 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 19:55:08.888686   15770 docker.go:319] overlay module found
	I1227 19:55:08.890184   15770 out.go:179] * Using the docker driver based on user configuration
	I1227 19:55:08.891171   15770 start.go:309] selected driver: docker
	I1227 19:55:08.891184   15770 start.go:928] validating driver "docker" against <nil>
	I1227 19:55:08.891195   15770 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 19:55:08.891750   15770 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 19:55:08.941134   15770 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-27 19:55:08.932393467 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 19:55:08.941270   15770 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 19:55:08.941474   15770 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 19:55:08.942829   15770 out.go:179] * Using Docker driver with root privileges
	I1227 19:55:08.943817   15770 cni.go:84] Creating CNI manager for ""
	I1227 19:55:08.943877   15770 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 19:55:08.943887   15770 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 19:55:08.943960   15770 start.go:353] cluster config:
	{Name:addons-416077 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-416077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s Rosetta:false}
	I1227 19:55:08.944961   15770 out.go:179] * Starting "addons-416077" primary control-plane node in "addons-416077" cluster
	I1227 19:55:08.946014   15770 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 19:55:08.947137   15770 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 19:55:08.948055   15770 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 19:55:08.948081   15770 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 19:55:08.948087   15770 cache.go:65] Caching tarball of preloaded images
	I1227 19:55:08.948146   15770 preload.go:251] Found /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 19:55:08.948147   15770 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 19:55:08.948156   15770 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 19:55:08.948450   15770 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/config.json ...
	I1227 19:55:08.948474   15770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/config.json: {Name:mk21379ccbb9df88e8842df4b6816db9d51faec9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:55:08.963168   15770 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a to local cache
	I1227 19:55:08.963282   15770 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local cache directory
	I1227 19:55:08.963300   15770 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local cache directory, skipping pull
	I1227 19:55:08.963306   15770 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in cache, skipping pull
	I1227 19:55:08.963315   15770 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a as a tarball
	I1227 19:55:08.963325   15770 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a from local cache
	I1227 19:55:21.602789   15770 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a from cached tarball
	I1227 19:55:21.602834   15770 cache.go:243] Successfully downloaded all kic artifacts
	I1227 19:55:21.602876   15770 start.go:360] acquireMachinesLock for addons-416077: {Name:mkf9c1185db985a1ff75cdf8b45773f7382a60e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 19:55:21.602984   15770 start.go:364] duration metric: took 89.767µs to acquireMachinesLock for "addons-416077"
	I1227 19:55:21.603006   15770 start.go:93] Provisioning new machine with config: &{Name:addons-416077 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-416077 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 19:55:21.603062   15770 start.go:125] createHost starting for "" (driver="docker")
	I1227 19:55:21.604612   15770 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1227 19:55:21.604813   15770 start.go:159] libmachine.API.Create for "addons-416077" (driver="docker")
	I1227 19:55:21.604847   15770 client.go:173] LocalClient.Create starting
	I1227 19:55:21.604947   15770 main.go:144] libmachine: Creating CA: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem
	I1227 19:55:21.705826   15770 main.go:144] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem
	I1227 19:55:21.773365   15770 cli_runner.go:164] Run: docker network inspect addons-416077 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 19:55:21.790021   15770 cli_runner.go:211] docker network inspect addons-416077 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 19:55:21.790097   15770 network_create.go:284] running [docker network inspect addons-416077] to gather additional debugging logs...
	I1227 19:55:21.790117   15770 cli_runner.go:164] Run: docker network inspect addons-416077
	W1227 19:55:21.805338   15770 cli_runner.go:211] docker network inspect addons-416077 returned with exit code 1
	I1227 19:55:21.805362   15770 network_create.go:287] error running [docker network inspect addons-416077]: docker network inspect addons-416077: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-416077 not found
	I1227 19:55:21.805371   15770 network_create.go:289] output of [docker network inspect addons-416077]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-416077 not found
	
	** /stderr **
	I1227 19:55:21.805479   15770 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 19:55:21.821724   15770 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00201c360}
	I1227 19:55:21.821759   15770 network_create.go:124] attempt to create docker network addons-416077 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1227 19:55:21.821797   15770 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-416077 addons-416077
	I1227 19:55:21.865431   15770 network_create.go:108] docker network addons-416077 192.168.49.0/24 created
	I1227 19:55:21.865462   15770 kic.go:121] calculated static IP "192.168.49.2" for the "addons-416077" container
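	The lines above show minikube probing for a free private /24 and then creating a labeled bridge network for the node, from which the static node IP 192.168.49.2 is derived. As a quick sanity check, the created network can be inspected with the plain docker CLI (a minimal sketch; the network name and label values are taken from the log above):
	    # List the minikube-created network and confirm its subnet/gateway
	    docker network ls --filter label=name.minikube.sigs.k8s.io=addons-416077
	    docker network inspect addons-416077 --format '{{json .IPAM.Config}}'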
	I1227 19:55:21.865518   15770 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 19:55:21.880612   15770 cli_runner.go:164] Run: docker volume create addons-416077 --label name.minikube.sigs.k8s.io=addons-416077 --label created_by.minikube.sigs.k8s.io=true
	I1227 19:55:21.897485   15770 oci.go:103] Successfully created a docker volume addons-416077
	I1227 19:55:21.897563   15770 cli_runner.go:164] Run: docker run --rm --name addons-416077-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-416077 --entrypoint /usr/bin/test -v addons-416077:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 19:55:27.422277   15770 cli_runner.go:217] Completed: docker run --rm --name addons-416077-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-416077 --entrypoint /usr/bin/test -v addons-416077:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib: (5.524666574s)
	I1227 19:55:27.422306   15770 oci.go:107] Successfully prepared a docker volume addons-416077
	I1227 19:55:27.422377   15770 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 19:55:27.422394   15770 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 19:55:27.422445   15770 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-416077:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 19:55:31.084841   15770 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-416077:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.662352109s)
	I1227 19:55:31.084890   15770 kic.go:203] duration metric: took 3.662477517s to extract preloaded images to volume ...
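	The preload step above mounts the lz4 tarball read-only into a throwaway kicbase container and untars it into the addons-416077 volume, so the node starts with its container images already in place. A minimal bash sketch of the same idea (command shape copied from the log; the cache path is shortened to $HOME/.minikube here, and KIC_IMAGE stands in for the full kicbase image reference whose digest is elided):
	    # Assumption: the preload tarball and kicbase image are already cached locally.
	    KIC_IMAGE="gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316"
	    PRELOAD="$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4"
	    docker run --rm --entrypoint /usr/bin/tar \
	      -v "$PRELOAD":/preloaded.tar:ro \
	      -v addons-416077:/extractDir \
	      "$KIC_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir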
	W1227 19:55:31.084992   15770 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1227 19:55:31.085027   15770 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1227 19:55:31.085071   15770 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 19:55:31.134708   15770 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-416077 --name addons-416077 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-416077 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-416077 --network addons-416077 --ip 192.168.49.2 --volume addons-416077:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 19:55:31.419115   15770 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Running}}
	I1227 19:55:31.436388   15770 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:55:31.453987   15770 cli_runner.go:164] Run: docker exec addons-416077 stat /var/lib/dpkg/alternatives/iptables
	I1227 19:55:31.504812   15770 oci.go:144] the created container "addons-416077" has a running status.
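	At this point the node container is up. The dynamically published SSH port that the later provisioning steps dial (127.0.0.1:32768 below) can be read back with the docker CLI (a small sketch; container name taken from the log):
	    # Confirm the node container is running and find its host-side SSH port
	    docker container inspect addons-416077 --format '{{.State.Status}}'
	    docker port addons-416077 22/tcp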
	I1227 19:55:31.504849   15770 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa...
	I1227 19:55:31.535140   15770 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 19:55:31.563548   15770 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:55:31.580128   15770 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 19:55:31.580146   15770 kic_runner.go:114] Args: [docker exec --privileged addons-416077 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 19:55:31.616596   15770 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:55:31.637512   15770 machine.go:94] provisionDockerMachine start ...
	I1227 19:55:31.637602   15770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:55:31.655949   15770 main.go:144] libmachine: Using SSH client type: native
	I1227 19:55:31.656252   15770 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1227 19:55:31.656274   15770 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 19:55:31.657637   15770 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39462->127.0.0.1:32768: read: connection reset by peer
	I1227 19:55:34.775396   15770 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-416077
	
	I1227 19:55:34.775420   15770 ubuntu.go:182] provisioning hostname "addons-416077"
	I1227 19:55:34.775472   15770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:55:34.792381   15770 main.go:144] libmachine: Using SSH client type: native
	I1227 19:55:34.792649   15770 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1227 19:55:34.792672   15770 main.go:144] libmachine: About to run SSH command:
	sudo hostname addons-416077 && echo "addons-416077" | sudo tee /etc/hostname
	I1227 19:55:34.918994   15770 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-416077
	
	I1227 19:55:34.919063   15770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:55:34.936298   15770 main.go:144] libmachine: Using SSH client type: native
	I1227 19:55:34.936499   15770 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1227 19:55:34.936514   15770 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-416077' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-416077/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-416077' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 19:55:35.054813   15770 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 19:55:35.054838   15770 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-10897/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-10897/.minikube}
	I1227 19:55:35.054876   15770 ubuntu.go:190] setting up certificates
	I1227 19:55:35.054897   15770 provision.go:84] configureAuth start
	I1227 19:55:35.054968   15770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-416077
	I1227 19:55:35.071878   15770 provision.go:143] copyHostCerts
	I1227 19:55:35.071961   15770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem (1078 bytes)
	I1227 19:55:35.072106   15770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem (1123 bytes)
	I1227 19:55:35.072208   15770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem (1675 bytes)
	I1227 19:55:35.072286   15770 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem org=jenkins.addons-416077 san=[127.0.0.1 192.168.49.2 addons-416077 localhost minikube]
	I1227 19:55:35.103907   15770 provision.go:177] copyRemoteCerts
	I1227 19:55:35.103962   15770 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 19:55:35.104012   15770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:55:35.120072   15770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:55:35.208450   15770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 19:55:35.225854   15770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1227 19:55:35.241485   15770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 19:55:35.257307   15770 provision.go:87] duration metric: took 202.389569ms to configureAuth
	I1227 19:55:35.257325   15770 ubuntu.go:206] setting minikube options for container-runtime
	I1227 19:55:35.257476   15770 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:55:35.257559   15770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:55:35.274748   15770 main.go:144] libmachine: Using SSH client type: native
	I1227 19:55:35.274989   15770 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1227 19:55:35.275005   15770 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 19:55:35.524033   15770 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 19:55:35.524058   15770 machine.go:97] duration metric: took 3.88652072s to provisionDockerMachine
	I1227 19:55:35.524070   15770 client.go:176] duration metric: took 13.919213948s to LocalClient.Create
	I1227 19:55:35.524090   15770 start.go:167] duration metric: took 13.919276771s to libmachine.API.Create "addons-416077"
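	Provisioning finishes by writing an insecure-registry override for CRI-O to /etc/sysconfig/crio.minikube and restarting the runtime (the SSH command a few lines above). If needed, the result can be checked from the host (a sketch; assumes the addons-416077 profile is reachable via minikube ssh):
	    # Show the generated CRI-O override inside the node
	    minikube -p addons-416077 ssh -- sudo cat /etc/sysconfig/crio.minikube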
	I1227 19:55:35.524100   15770 start.go:293] postStartSetup for "addons-416077" (driver="docker")
	I1227 19:55:35.524109   15770 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 19:55:35.524155   15770 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 19:55:35.524193   15770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:55:35.540246   15770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:55:35.629599   15770 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 19:55:35.633187   15770 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 19:55:35.633219   15770 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 19:55:35.633229   15770 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/addons for local assets ...
	I1227 19:55:35.633278   15770 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/files for local assets ...
	I1227 19:55:35.633303   15770 start.go:296] duration metric: took 109.197754ms for postStartSetup
	I1227 19:55:35.633566   15770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-416077
	I1227 19:55:35.650240   15770 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/config.json ...
	I1227 19:55:35.650477   15770 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 19:55:35.650517   15770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:55:35.665625   15770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:55:35.750258   15770 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 19:55:35.754232   15770 start.go:128] duration metric: took 14.151148696s to createHost
	I1227 19:55:35.754257   15770 start.go:83] releasing machines lock for "addons-416077", held for 14.151260482s
	I1227 19:55:35.754314   15770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-416077
	I1227 19:55:35.770222   15770 ssh_runner.go:195] Run: cat /version.json
	I1227 19:55:35.770261   15770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:55:35.770284   15770 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 19:55:35.770361   15770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:55:35.788856   15770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:55:35.789540   15770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:55:35.926707   15770 ssh_runner.go:195] Run: systemctl --version
	I1227 19:55:35.932472   15770 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 19:55:35.963887   15770 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 19:55:35.967987   15770 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 19:55:35.968034   15770 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 19:55:35.991601   15770 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1227 19:55:35.991625   15770 start.go:496] detecting cgroup driver to use...
	I1227 19:55:35.991652   15770 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 19:55:35.991699   15770 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 19:55:36.006058   15770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 19:55:36.017078   15770 docker.go:218] disabling cri-docker service (if available) ...
	I1227 19:55:36.017115   15770 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 19:55:36.032083   15770 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 19:55:36.047815   15770 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 19:55:36.124128   15770 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 19:55:36.210927   15770 docker.go:234] disabling docker service ...
	I1227 19:55:36.210980   15770 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 19:55:36.227790   15770 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 19:55:36.238956   15770 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 19:55:36.321424   15770 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 19:55:36.404968   15770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 19:55:36.416090   15770 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 19:55:36.428531   15770 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 19:55:36.428581   15770 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 19:55:36.437700   15770 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 19:55:36.437755   15770 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 19:55:36.445657   15770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 19:55:36.453287   15770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 19:55:36.461154   15770 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 19:55:36.468482   15770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 19:55:36.476786   15770 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 19:55:36.489623   15770 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 19:55:36.498025   15770 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 19:55:36.504861   15770 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1227 19:55:36.504925   15770 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1227 19:55:36.515986   15770 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 19:55:36.523156   15770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 19:55:36.602384   15770 ssh_runner.go:195] Run: sudo systemctl restart crio
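	The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, systemd cgroup manager, pod-scoped conmon cgroup, unprivileged-port sysctl) and then restarts CRI-O. A quick spot-check of the expected result (a sketch, run inside the node):
	    # Values the sed edits above should have left behind
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	    sudo systemctl is-active crio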
	I1227 19:55:36.733771   15770 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 19:55:36.733841   15770 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 19:55:36.737432   15770 start.go:574] Will wait 60s for crictl version
	I1227 19:55:36.737518   15770 ssh_runner.go:195] Run: which crictl
	I1227 19:55:36.740757   15770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 19:55:36.764592   15770 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 19:55:36.764706   15770 ssh_runner.go:195] Run: crio --version
	I1227 19:55:36.790026   15770 ssh_runner.go:195] Run: crio --version
	I1227 19:55:36.817056   15770 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 19:55:36.818007   15770 cli_runner.go:164] Run: docker network inspect addons-416077 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 19:55:36.833987   15770 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 19:55:36.837587   15770 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 19:55:36.847178   15770 kubeadm.go:884] updating cluster {Name:addons-416077 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-416077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 19:55:36.847296   15770 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 19:55:36.847343   15770 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 19:55:36.877327   15770 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 19:55:36.877347   15770 crio.go:433] Images already preloaded, skipping extraction
	I1227 19:55:36.877385   15770 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 19:55:36.901050   15770 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 19:55:36.901070   15770 cache_images.go:86] Images are preloaded, skipping loading
	I1227 19:55:36.901079   15770 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.35.0 crio true true} ...
	I1227 19:55:36.901174   15770 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-416077 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:addons-416077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 19:55:36.901247   15770 ssh_runner.go:195] Run: crio config
	I1227 19:55:36.943817   15770 cni.go:84] Creating CNI manager for ""
	I1227 19:55:36.943836   15770 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 19:55:36.943851   15770 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 19:55:36.943870   15770 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-416077 NodeName:addons-416077 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 19:55:36.944001   15770 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-416077"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 19:55:36.944055   15770 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 19:55:36.951412   15770 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 19:55:36.951461   15770 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 19:55:36.958582   15770 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1227 19:55:36.969814   15770 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 19:55:36.983281   15770 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1227 19:55:36.994390   15770 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1227 19:55:36.997519   15770 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 19:55:37.006196   15770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 19:55:37.080638   15770 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 19:55:37.101896   15770 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077 for IP: 192.168.49.2
	I1227 19:55:37.101924   15770 certs.go:195] generating shared ca certs ...
	I1227 19:55:37.101944   15770 certs.go:227] acquiring lock for ca certs: {Name:mkf159d13acb73e6d0ac046e4aaef1dce56a185d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:55:37.102071   15770 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key
	I1227 19:55:37.263272   15770 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt ...
	I1227 19:55:37.263301   15770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt: {Name:mk6f27d52a44dcff743911d2a652926eb298fb43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:55:37.263468   15770 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key ...
	I1227 19:55:37.263480   15770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key: {Name:mkbdb7b8e9120c4f17e78f7f722109d2b11e7f25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:55:37.263550   15770 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key
	I1227 19:55:37.338881   15770 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.crt ...
	I1227 19:55:37.338908   15770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.crt: {Name:mk7f71150ce80abc0ad9c5eb0d331ebb740a03ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:55:37.339059   15770 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key ...
	I1227 19:55:37.339070   15770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key: {Name:mke23e7b4de4c624697c9521ced8134b5676bde7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:55:37.339136   15770 certs.go:257] generating profile certs ...
	I1227 19:55:37.339221   15770 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/client.key
	I1227 19:55:37.339242   15770 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/client.crt with IP's: []
	I1227 19:55:37.609494   15770 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/client.crt ...
	I1227 19:55:37.609522   15770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/client.crt: {Name:mkcbfa03790b69e5a6772747e3278672cd014f4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:55:37.609677   15770 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/client.key ...
	I1227 19:55:37.609688   15770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/client.key: {Name:mk8351a3b1891b362c1ae1fb6762f92b392587ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:55:37.609756   15770 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/apiserver.key.6113bef7
	I1227 19:55:37.609773   15770 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/apiserver.crt.6113bef7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1227 19:55:37.895749   15770 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/apiserver.crt.6113bef7 ...
	I1227 19:55:37.895772   15770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/apiserver.crt.6113bef7: {Name:mkdb764daa0642c5ad210feb3d01a43aefd7f402 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:55:37.895938   15770 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/apiserver.key.6113bef7 ...
	I1227 19:55:37.895955   15770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/apiserver.key.6113bef7: {Name:mkd0f4560ae9773e9ee7fd4d03da3c6136b6df74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:55:37.896039   15770 certs.go:382] copying /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/apiserver.crt.6113bef7 -> /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/apiserver.crt
	I1227 19:55:37.896124   15770 certs.go:386] copying /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/apiserver.key.6113bef7 -> /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/apiserver.key
	I1227 19:55:37.896179   15770 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/proxy-client.key
	I1227 19:55:37.896196   15770 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/proxy-client.crt with IP's: []
	I1227 19:55:37.923776   15770 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/proxy-client.crt ...
	I1227 19:55:37.923806   15770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/proxy-client.crt: {Name:mk34fd59c3f3a5dde4d578e73e4ca9d14287217b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:55:37.923944   15770 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/proxy-client.key ...
	I1227 19:55:37.923956   15770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/proxy-client.key: {Name:mk2f3f87a04d6e29824119a6f6a6fcc3f162aa8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:55:37.924124   15770 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 19:55:37.924159   15770 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem (1078 bytes)
	I1227 19:55:37.924183   15770 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem (1123 bytes)
	I1227 19:55:37.924206   15770 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem (1675 bytes)
	I1227 19:55:37.924755   15770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 19:55:37.941731   15770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 19:55:37.957365   15770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 19:55:37.972940   15770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 19:55:37.988547   15770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1227 19:55:38.004341   15770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 19:55:38.020560   15770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 19:55:38.035985   15770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 19:55:38.051561   15770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 19:55:38.068593   15770 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 19:55:38.079571   15770 ssh_runner.go:195] Run: openssl version
	I1227 19:55:38.084841   15770 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 19:55:38.091212   15770 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 19:55:38.099657   15770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 19:55:38.102794   15770 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1227 19:55:38.102835   15770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 19:55:38.135688   15770 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 19:55:38.142471   15770 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 19:55:38.149077   15770 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 19:55:38.152269   15770 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 19:55:38.152318   15770 kubeadm.go:401] StartCluster: {Name:addons-416077 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-416077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 19:55:38.152397   15770 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:55:38.152445   15770 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:55:38.176404   15770 cri.go:96] found id: ""
	I1227 19:55:38.176475   15770 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 19:55:38.183899   15770 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 19:55:38.191049   15770 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 19:55:38.191090   15770 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 19:55:38.197950   15770 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 19:55:38.197963   15770 kubeadm.go:158] found existing configuration files:
	
	I1227 19:55:38.198007   15770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 19:55:38.204849   15770 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 19:55:38.204883   15770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 19:55:38.211292   15770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 19:55:38.218037   15770 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 19:55:38.218093   15770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 19:55:38.224452   15770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 19:55:38.230923   15770 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 19:55:38.230958   15770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 19:55:38.237395   15770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 19:55:38.243892   15770 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 19:55:38.243950   15770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 19:55:38.250300   15770 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 19:55:38.343293   15770 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1227 19:55:38.395401   15770 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 19:55:44.790643   15770 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 19:55:44.790722   15770 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 19:55:44.790813   15770 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 19:55:44.790859   15770 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1227 19:55:44.790889   15770 kubeadm.go:319] OS: Linux
	I1227 19:55:44.790960   15770 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 19:55:44.791022   15770 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 19:55:44.791074   15770 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 19:55:44.791117   15770 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 19:55:44.791159   15770 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 19:55:44.791239   15770 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 19:55:44.791305   15770 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 19:55:44.791358   15770 kubeadm.go:319] CGROUPS_IO: enabled
	I1227 19:55:44.791441   15770 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 19:55:44.791568   15770 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 19:55:44.791698   15770 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 19:55:44.791799   15770 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 19:55:44.794144   15770 out.go:252]   - Generating certificates and keys ...
	I1227 19:55:44.794233   15770 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 19:55:44.794304   15770 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 19:55:44.794387   15770 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 19:55:44.794465   15770 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 19:55:44.794556   15770 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 19:55:44.794637   15770 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 19:55:44.794722   15770 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 19:55:44.794894   15770 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-416077 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1227 19:55:44.795030   15770 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 19:55:44.795206   15770 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-416077 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1227 19:55:44.795274   15770 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 19:55:44.795362   15770 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 19:55:44.795400   15770 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 19:55:44.795445   15770 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 19:55:44.795489   15770 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 19:55:44.795535   15770 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 19:55:44.795578   15770 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 19:55:44.795660   15770 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 19:55:44.795718   15770 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 19:55:44.795794   15770 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 19:55:44.795848   15770 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 19:55:44.797064   15770 out.go:252]   - Booting up control plane ...
	I1227 19:55:44.797132   15770 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 19:55:44.797213   15770 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 19:55:44.797283   15770 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 19:55:44.797386   15770 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 19:55:44.797477   15770 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 19:55:44.797574   15770 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 19:55:44.797646   15770 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 19:55:44.797683   15770 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 19:55:44.797827   15770 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 19:55:44.797983   15770 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 19:55:44.798075   15770 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.632554ms
	I1227 19:55:44.798220   15770 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 19:55:44.798323   15770 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1227 19:55:44.798402   15770 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 19:55:44.798468   15770 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 19:55:44.798528   15770 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004506535s
	I1227 19:55:44.798593   15770 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.302904383s
	I1227 19:55:44.798667   15770 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.001433065s
	I1227 19:55:44.798767   15770 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 19:55:44.798937   15770 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 19:55:44.799019   15770 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 19:55:44.799187   15770 kubeadm.go:319] [mark-control-plane] Marking the node addons-416077 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 19:55:44.799237   15770 kubeadm.go:319] [bootstrap-token] Using token: mflgao.302mz762c1mlqn9w
	I1227 19:55:44.800976   15770 out.go:252]   - Configuring RBAC rules ...
	I1227 19:55:44.801058   15770 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 19:55:44.801153   15770 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 19:55:44.801281   15770 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 19:55:44.801406   15770 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 19:55:44.801562   15770 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 19:55:44.801651   15770 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 19:55:44.801755   15770 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 19:55:44.801796   15770 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 19:55:44.801834   15770 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 19:55:44.801839   15770 kubeadm.go:319] 
	I1227 19:55:44.801905   15770 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 19:55:44.801935   15770 kubeadm.go:319] 
	I1227 19:55:44.802028   15770 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 19:55:44.802042   15770 kubeadm.go:319] 
	I1227 19:55:44.802082   15770 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 19:55:44.802153   15770 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 19:55:44.802233   15770 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 19:55:44.802242   15770 kubeadm.go:319] 
	I1227 19:55:44.802320   15770 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 19:55:44.802328   15770 kubeadm.go:319] 
	I1227 19:55:44.802397   15770 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 19:55:44.802414   15770 kubeadm.go:319] 
	I1227 19:55:44.802483   15770 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 19:55:44.802543   15770 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 19:55:44.802598   15770 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 19:55:44.802604   15770 kubeadm.go:319] 
	I1227 19:55:44.802671   15770 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 19:55:44.802733   15770 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 19:55:44.802738   15770 kubeadm.go:319] 
	I1227 19:55:44.802814   15770 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token mflgao.302mz762c1mlqn9w \
	I1227 19:55:44.802903   15770 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da6685172cfe638aa995cfd5ba180acce81fcf1770a93ae27b4f215e6d45ef35 \
	I1227 19:55:44.802950   15770 kubeadm.go:319] 	--control-plane 
	I1227 19:55:44.802959   15770 kubeadm.go:319] 
	I1227 19:55:44.803031   15770 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 19:55:44.803037   15770 kubeadm.go:319] 
	I1227 19:55:44.803112   15770 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token mflgao.302mz762c1mlqn9w \
	I1227 19:55:44.803211   15770 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da6685172cfe638aa995cfd5ba180acce81fcf1770a93ae27b4f215e6d45ef35 
	I1227 19:55:44.803220   15770 cni.go:84] Creating CNI manager for ""
	I1227 19:55:44.803226   15770 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 19:55:44.804437   15770 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1227 19:55:44.805352   15770 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 19:55:44.809454   15770 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1227 19:55:44.809478   15770 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 19:55:44.822110   15770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1227 19:55:45.032157   15770 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 19:55:45.032261   15770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 19:55:45.032381   15770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-416077 minikube.k8s.io/updated_at=2025_12_27T19_55_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562 minikube.k8s.io/name=addons-416077 minikube.k8s.io/primary=true
	I1227 19:55:45.043949   15770 ops.go:34] apiserver oom_adj: -16
	I1227 19:55:45.100325   15770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 19:55:45.600427   15770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 19:55:46.101158   15770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 19:55:46.600428   15770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 19:55:47.101326   15770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 19:55:47.600630   15770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 19:55:48.100793   15770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 19:55:48.601075   15770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 19:55:49.101236   15770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 19:55:49.600769   15770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 19:55:49.660070   15770 kubeadm.go:1114] duration metric: took 4.627860177s to wait for elevateKubeSystemPrivileges
	I1227 19:55:49.660101   15770 kubeadm.go:403] duration metric: took 11.5077878s to StartCluster
	I1227 19:55:49.660118   15770 settings.go:142] acquiring lock: {Name:mkf3077b12a3cef0d75d0d0642c2652aa74718f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:55:49.660213   15770 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 19:55:49.660602   15770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/kubeconfig: {Name:mk4ccafb996928672a8e78a62ce47d8add645009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:55:49.660785   15770 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 19:55:49.660797   15770 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 19:55:49.660847   15770 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1227 19:55:49.660972   15770 addons.go:70] Setting yakd=true in profile "addons-416077"
	I1227 19:55:49.660990   15770 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-416077"
	I1227 19:55:49.661005   15770 addons.go:239] Setting addon yakd=true in "addons-416077"
	I1227 19:55:49.661003   15770 addons.go:70] Setting default-storageclass=true in profile "addons-416077"
	I1227 19:55:49.661039   15770 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:55:49.661047   15770 addons.go:70] Setting gcp-auth=true in profile "addons-416077"
	I1227 19:55:49.661050   15770 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-416077"
	I1227 19:55:49.661058   15770 addons.go:70] Setting storage-provisioner=true in profile "addons-416077"
	I1227 19:55:49.661070   15770 mustload.go:66] Loading cluster: addons-416077
	I1227 19:55:49.661078   15770 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-416077"
	I1227 19:55:49.661085   15770 addons.go:70] Setting volcano=true in profile "addons-416077"
	I1227 19:55:49.661092   15770 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-416077"
	I1227 19:55:49.661101   15770 addons.go:239] Setting addon volcano=true in "addons-416077"
	I1227 19:55:49.661133   15770 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:55:49.661139   15770 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-416077"
	I1227 19:55:49.661037   15770 addons.go:70] Setting cloud-spanner=true in profile "addons-416077"
	I1227 19:55:49.661156   15770 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-416077"
	I1227 19:55:49.661188   15770 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:55:49.661188   15770 addons.go:239] Setting addon cloud-spanner=true in "addons-416077"
	I1227 19:55:49.661297   15770 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:55:49.661316   15770 addons.go:70] Setting ingress=true in profile "addons-416077"
	I1227 19:55:49.661337   15770 addons.go:239] Setting addon ingress=true in "addons-416077"
	I1227 19:55:49.661373   15770 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:55:49.661444   15770 addons.go:70] Setting volumesnapshots=true in profile "addons-416077"
	I1227 19:55:49.661462   15770 addons.go:239] Setting addon volumesnapshots=true in "addons-416077"
	I1227 19:55:49.661485   15770 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:55:49.661502   15770 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:55:49.661042   15770 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:55:49.661664   15770 addons.go:70] Setting inspektor-gadget=true in profile "addons-416077"
	I1227 19:55:49.661701   15770 addons.go:239] Setting addon inspektor-gadget=true in "addons-416077"
	I1227 19:55:49.661710   15770 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:55:49.661739   15770 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:55:49.661866   15770 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:55:49.661895   15770 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:55:49.661964   15770 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:55:49.662131   15770 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:55:49.662247   15770 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:55:49.662597   15770 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:55:49.662813   15770 addons.go:70] Setting ingress-dns=true in profile "addons-416077"
	I1227 19:55:49.662831   15770 addons.go:239] Setting addon ingress-dns=true in "addons-416077"
	I1227 19:55:49.663096   15770 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:55:49.661133   15770 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-416077"
	I1227 19:55:49.663324   15770 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:55:49.663797   15770 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:55:49.661074   15770 addons.go:239] Setting addon storage-provisioner=true in "addons-416077"
	I1227 19:55:49.663898   15770 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:55:49.664185   15770 out.go:179] * Verifying Kubernetes components...
	I1227 19:55:49.661040   15770 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-416077"
	I1227 19:55:49.661299   15770 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:55:49.664400   15770 addons.go:70] Setting metrics-server=true in profile "addons-416077"
	I1227 19:55:49.664948   15770 addons.go:239] Setting addon metrics-server=true in "addons-416077"
	I1227 19:55:49.664979   15770 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:55:49.665481   15770 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:55:49.661053   15770 addons.go:70] Setting registry-creds=true in profile "addons-416077"
	I1227 19:55:49.666182   15770 addons.go:239] Setting addon registry-creds=true in "addons-416077"
	I1227 19:55:49.666227   15770 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:55:49.661041   15770 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-416077"
	I1227 19:55:49.668188   15770 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:55:49.661048   15770 addons.go:70] Setting registry=true in profile "addons-416077"
	I1227 19:55:49.668356   15770 addons.go:239] Setting addon registry=true in "addons-416077"
	I1227 19:55:49.668384   15770 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:55:49.668806   15770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 19:55:49.675574   15770 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:55:49.676497   15770 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:55:49.679220   15770 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:55:49.679288   15770 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:55:49.679465   15770 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:55:49.683031   15770 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:55:49.683426   15770 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:55:49.726534   15770 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1227 19:55:49.726609   15770 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.46
	I1227 19:55:49.728421   15770 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1227 19:55:49.728446   15770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1227 19:55:49.728509   15770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:55:49.729904   15770 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1227 19:55:49.730959   15770 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1227 19:55:49.731999   15770 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1227 19:55:49.734327   15770 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1227 19:55:49.734353   15770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1227 19:55:49.734410   15770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:55:49.735068   15770 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1227 19:55:49.735094   15770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16257 bytes)
	I1227 19:55:49.735142   15770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:55:49.742369   15770 addons.go:239] Setting addon default-storageclass=true in "addons-416077"
	I1227 19:55:49.742433   15770 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:55:49.743022   15770 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	W1227 19:55:49.744237   15770 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1227 19:55:49.747876   15770 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-416077"
	I1227 19:55:49.747930   15770 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:55:49.748384   15770 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:55:49.758367   15770 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1227 19:55:49.759772   15770 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1227 19:55:49.759813   15770 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1227 19:55:49.759887   15770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:55:49.779123   15770 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1227 19:55:49.779195   15770 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1227 19:55:49.781498   15770 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1227 19:55:49.781620   15770 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1227 19:55:49.781646   15770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1227 19:55:49.781707   15770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:55:49.785635   15770 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1227 19:55:49.787620   15770 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 19:55:49.788755   15770 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 19:55:49.788785   15770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 19:55:49.788852   15770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:55:49.789548   15770 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1227 19:55:49.790221   15770 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1227 19:55:49.791541   15770 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1227 19:55:49.791561   15770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1227 19:55:49.791634   15770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:55:49.791688   15770 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 19:55:49.795514   15770 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1227 19:55:49.795572   15770 out.go:179]   - Using image docker.io/registry:3.0.0
	I1227 19:55:49.796633   15770 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1227 19:55:49.796679   15770 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1227 19:55:49.799353   15770 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1227 19:55:49.799499   15770 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1227 19:55:49.799442   15770 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1227 19:55:49.799765   15770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1227 19:55:49.799973   15770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:55:49.800701   15770 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1227 19:55:49.800723   15770 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1227 19:55:49.800791   15770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:55:49.801899   15770 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1227 19:55:49.803012   15770 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1227 19:55:49.803028   15770 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1227 19:55:49.803518   15770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:55:49.810934   15770 out.go:179]   - Using image ghcr.io/manusa/yakd:0.0.6
	I1227 19:55:49.813342   15770 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1227 19:55:49.813365   15770 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1227 19:55:49.813438   15770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:55:49.815059   15770 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:55:49.816356   15770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:55:49.826871   15770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:55:49.827410   15770 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I1227 19:55:49.828608   15770 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1227 19:55:49.828748   15770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1227 19:55:49.828884   15770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:55:49.832976   15770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:55:49.847283   15770 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 19:55:49.847309   15770 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 19:55:49.847371   15770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:55:49.854951   15770 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1227 19:55:49.856293   15770 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1227 19:55:49.856355   15770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1227 19:55:49.856449   15770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:55:49.873557   15770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:55:49.880541   15770 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 19:55:49.884983   15770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:55:49.886594   15770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:55:49.889215   15770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:55:49.889344   15770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:55:49.893749   15770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:55:49.895032   15770 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1227 19:55:49.896269   15770 out.go:179]   - Using image docker.io/busybox:stable
	I1227 19:55:49.897441   15770 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1227 19:55:49.897561   15770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1227 19:55:49.897948   15770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:55:49.900124   15770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:55:49.900338   15770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:55:49.909159   15770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:55:49.925864   15770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:55:49.929030   15770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:55:49.943124   15770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:55:50.025289   15770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1227 19:55:50.025934   15770 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1227 19:55:50.025956   15770 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1227 19:55:50.030719   15770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1227 19:55:50.042938   15770 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1227 19:55:50.042964   15770 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1227 19:55:50.069613   15770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1227 19:55:50.070609   15770 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1227 19:55:50.070630   15770 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1227 19:55:50.083175   15770 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1227 19:55:50.083209   15770 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1227 19:55:50.091685   15770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 19:55:50.094191   15770 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1227 19:55:50.094210   15770 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1227 19:55:50.095676   15770 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1227 19:55:50.095700   15770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1227 19:55:50.096617   15770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1227 19:55:50.098544   15770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1227 19:55:50.100274   15770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1227 19:55:50.103862   15770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1227 19:55:50.106090   15770 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1227 19:55:50.106105   15770 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1227 19:55:50.116237   15770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 19:55:50.116584   15770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1227 19:55:50.126971   15770 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1227 19:55:50.126994   15770 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1227 19:55:50.143337   15770 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1227 19:55:50.143370   15770 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1227 19:55:50.149285   15770 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1227 19:55:50.149313   15770 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1227 19:55:50.150289   15770 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1227 19:55:50.150310   15770 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1227 19:55:50.152568   15770 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1227 19:55:50.152588   15770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1227 19:55:50.192405   15770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1227 19:55:50.197973   15770 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1227 19:55:50.198019   15770 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1227 19:55:50.199167   15770 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1227 19:55:50.199186   15770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1227 19:55:50.207579   15770 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1227 19:55:50.207607   15770 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1227 19:55:50.224656   15770 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1227 19:55:50.224685   15770 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1227 19:55:50.227359   15770 node_ready.go:35] waiting up to 6m0s for node "addons-416077" to be "Ready" ...
	I1227 19:55:50.228008   15770 start.go:987] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1227 19:55:50.256300   15770 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1227 19:55:50.256340   15770 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1227 19:55:50.259521   15770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1227 19:55:50.279541   15770 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1227 19:55:50.279569   15770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2013 bytes)
	I1227 19:55:50.310972   15770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1227 19:55:50.319805   15770 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1227 19:55:50.319838   15770 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1227 19:55:50.345662   15770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1227 19:55:50.369514   15770 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1227 19:55:50.369544   15770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1227 19:55:50.433072   15770 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1227 19:55:50.433126   15770 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1227 19:55:50.504693   15770 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1227 19:55:50.504732   15770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1227 19:55:50.552489   15770 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1227 19:55:50.552514   15770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1227 19:55:50.614609   15770 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1227 19:55:50.614640   15770 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1227 19:55:50.655683   15770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1227 19:55:50.734379   15770 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-416077" context rescaled to 1 replicas
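The rescale logged above is an ordinary scale of the coredns deployment to a single replica; a minimal illustrative equivalent with plain kubectl (not taken from the test, assuming the same cluster context) would be:

	kubectl -n kube-system scale deployment coredns --replicas=1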
	I1227 19:55:51.316663   15770 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.285910629s)
	I1227 19:55:51.316778   15770 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.247134889s)
	I1227 19:55:51.316838   15770 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.225133815s)
	I1227 19:55:51.316842   15770 addons.go:495] Verifying addon ingress=true in "addons-416077"
	I1227 19:55:51.317102   15770 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.220460956s)
	I1227 19:55:51.317202   15770 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.218638875s)
	I1227 19:55:51.317264   15770 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.216974514s)
	I1227 19:55:51.317297   15770 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.213396174s)
	I1227 19:55:51.317357   15770 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.201100053s)
	I1227 19:55:51.317462   15770 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.200844171s)
	I1227 19:55:51.317539   15770 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.125093518s)
	I1227 19:55:51.317570   15770 addons.go:495] Verifying addon registry=true in "addons-416077"
	I1227 19:55:51.318346   15770 out.go:179] * Verifying ingress addon...
	I1227 19:55:51.319161   15770 out.go:179] * Verifying registry addon...
	I1227 19:55:51.320198   15770 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1227 19:55:51.321526   15770 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1227 19:55:51.335108   15770 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1227 19:55:51.335133   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:55:51.335758   15770 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1227 19:55:51.335775   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1227 19:55:51.336487   15770 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
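The warning above is a write conflict while demoting the pre-existing "local-path" storage class from default, not a failure of the addon itself. A minimal sketch of the equivalent manual step, using the standard default-class annotation (the command is not part of the test output; simply re-running it after such a conflict normally succeeds):

	# assumes kubectl is pointed at the addons-416077 cluster
	kubectl patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'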
	I1227 19:55:51.807462   15770 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.547884799s)
	W1227 19:55:51.807512   15770 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1227 19:55:51.807549   15770 retry.go:84] will retry after 200ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
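The failure above is an ordering issue rather than a broken manifest: the VolumeSnapshot CRDs are created in the same apply, but the VolumeSnapshotClass cannot be mapped until those CRDs are established, so the first apply exits with status 1 and minikube retries (the retry with --force at 19:55:52 below completes about 2.5 seconds later). A minimal sketch of applying the same files in two phases, assuming kubectl access to the cluster; the kubectl wait step is standard usage and not taken from the test:

	# phase 1: create the snapshot CRDs and wait for them to be established
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	              -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	              -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	  crd/volumesnapshots.snapshot.storage.k8s.io
	# phase 2: apply the objects that depend on those CRDs
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	              -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	              -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml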
	I1227 19:55:51.807625   15770 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.49661903s)
	I1227 19:55:51.807647   15770 addons.go:495] Verifying addon metrics-server=true in "addons-416077"
	I1227 19:55:51.807700   15770 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.462006731s)
	I1227 19:55:51.808081   15770 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.15235344s)
	I1227 19:55:51.808124   15770 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-416077"
	I1227 19:55:51.812960   15770 out.go:179] * Verifying csi-hostpath-driver addon...
	I1227 19:55:51.813030   15770 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-416077 service yakd-dashboard -n yakd-dashboard
	
	I1227 19:55:51.815288   15770 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1227 19:55:51.820792   15770 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1227 19:55:51.820857   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:55:51.830836   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:55:51.831093   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:55:52.058382   15770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1227 19:55:52.230081   15770 node_ready.go:57] node "addons-416077" has "Ready":"False" status (will retry)
	I1227 19:55:52.318399   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:55:52.418827   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:55:52.418853   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:55:52.818440   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:55:52.919429   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:55:52.919494   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:55:53.319192   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:55:53.323270   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:55:53.324133   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:55:53.818194   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:55:53.918683   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:55:53.918754   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1227 19:55:54.230291   15770 node_ready.go:57] node "addons-416077" has "Ready":"False" status (will retry)
	I1227 19:55:54.318114   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:55:54.323258   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:55:54.323829   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:55:54.518670   15770 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.460248643s)
	I1227 19:55:54.818363   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:55:54.919483   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:55:54.919619   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:55:55.318391   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:55:55.323164   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:55:55.324246   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:55:55.819146   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:55:55.919984   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:55:55.920200   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:55:56.318636   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:55:56.322550   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:55:56.323480   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1227 19:55:56.730374   15770 node_ready.go:57] node "addons-416077" has "Ready":"False" status (will retry)
	I1227 19:55:56.819106   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:55:56.919764   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:55:56.919978   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:55:57.319730   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:55:57.322562   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:55:57.323424   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:55:57.421372   15770 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1227 19:55:57.421434   15770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:55:57.437964   15770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:55:57.531509   15770 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1227 19:55:57.542798   15770 addons.go:239] Setting addon gcp-auth=true in "addons-416077"
	I1227 19:55:57.542845   15770 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:55:57.543227   15770 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:55:57.560411   15770 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1227 19:55:57.560449   15770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:55:57.576293   15770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:55:57.663114   15770 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1227 19:55:57.664630   15770 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1227 19:55:57.665621   15770 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1227 19:55:57.665633   15770 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1227 19:55:57.677657   15770 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1227 19:55:57.677672   15770 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1227 19:55:57.689313   15770 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1227 19:55:57.689332   15770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1227 19:55:57.700894   15770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1227 19:55:57.819064   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:55:57.823136   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:55:57.824382   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:55:57.989847   15770 addons.go:495] Verifying addon gcp-auth=true in "addons-416077"
	I1227 19:55:57.991144   15770 out.go:179] * Verifying gcp-auth addon...
	I1227 19:55:57.993346   15770 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1227 19:55:57.995383   15770 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1227 19:55:57.995397   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:55:58.318547   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:55:58.322473   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:55:58.323440   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:55:58.495881   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1227 19:55:58.730581   15770 node_ready.go:57] node "addons-416077" has "Ready":"False" status (will retry)
	I1227 19:55:58.818456   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:55:58.822439   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:55:58.823783   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:55:58.996196   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:55:59.317986   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:55:59.323040   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:55:59.323700   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:55:59.496211   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:55:59.818019   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:55:59.822928   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:55:59.823963   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:55:59.996641   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:00.318467   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:00.322879   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:00.323408   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:00.495927   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:00.818350   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:00.822047   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:00.823426   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:00.995703   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1227 19:56:01.230112   15770 node_ready.go:57] node "addons-416077" has "Ready":"False" status (will retry)
	I1227 19:56:01.318681   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:01.322620   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:01.323578   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:01.496185   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:01.817787   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:01.822743   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:01.823597   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:01.995955   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:02.318615   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:02.322610   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:02.323351   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:02.495858   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:02.825870   15770 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1227 19:56:02.825898   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:02.826468   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:02.826660   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:02.996219   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:03.232428   15770 node_ready.go:49] node "addons-416077" is "Ready"
	I1227 19:56:03.232459   15770 node_ready.go:38] duration metric: took 13.005065735s for node "addons-416077" to be "Ready" ...
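The 13s wait above is polling the node's Ready condition; a minimal illustrative one-off check with plain kubectl (hypothetical manual invocation, not part of the test) would be:

	kubectl wait --for=condition=Ready node/addons-416077 --timeout=360s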
	I1227 19:56:03.232474   15770 api_server.go:52] waiting for apiserver process to appear ...
	I1227 19:56:03.232529   15770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 19:56:03.253743   15770 api_server.go:72] duration metric: took 13.592921545s to wait for apiserver process to appear ...
	I1227 19:56:03.253772   15770 api_server.go:88] waiting for apiserver healthz status ...
	I1227 19:56:03.253794   15770 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 19:56:03.258787   15770 api_server.go:325] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1227 19:56:03.259871   15770 api_server.go:141] control plane version: v1.35.0
	I1227 19:56:03.259900   15770 api_server.go:131] duration metric: took 6.119754ms to wait for apiserver health ...
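The health check above is the apiserver's /healthz endpoint returning HTTP 200 with body "ok"; a minimal sketch of querying it through the kubeconfig credentials (illustrative, assuming the same cluster context) would be:

	kubectl get --raw='/healthz'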
	I1227 19:56:03.259930   15770 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 19:56:03.264086   15770 system_pods.go:59] 20 kube-system pods found
	I1227 19:56:03.264129   15770 system_pods.go:61] "amd-gpu-device-plugin-qn65m" [7b2a13a1-3c72-4975-9585-3d01f151c548] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1227 19:56:03.264149   15770 system_pods.go:61] "coredns-7d764666f9-7l6tj" [d53972fe-9d23-40fd-9b37-36561cc9bf04] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 19:56:03.264160   15770 system_pods.go:61] "csi-hostpath-attacher-0" [e4a001d0-022e-44af-8dc1-88fdcf077a41] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1227 19:56:03.264169   15770 system_pods.go:61] "csi-hostpath-resizer-0" [b5d1cb68-001f-4c12-8121-1609511d544f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1227 19:56:03.264177   15770 system_pods.go:61] "csi-hostpathplugin-fq7b4" [fdf4583b-5df9-4b45-ad97-b9d7086735ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1227 19:56:03.264185   15770 system_pods.go:61] "etcd-addons-416077" [b7a9f1bf-f2ae-4e42-a09d-c9bc6fc1d519] Running
	I1227 19:56:03.264190   15770 system_pods.go:61] "kindnet-g8dlg" [1ba6e2c5-3475-4033-9fe5-4a09af712b1b] Running
	I1227 19:56:03.264195   15770 system_pods.go:61] "kube-apiserver-addons-416077" [777f989a-516c-428f-a876-7d2942b2037b] Running
	I1227 19:56:03.264200   15770 system_pods.go:61] "kube-controller-manager-addons-416077" [d8095884-eca7-4988-9142-6fec044c25ce] Running
	I1227 19:56:03.264207   15770 system_pods.go:61] "kube-ingress-dns-minikube" [6ef27a4a-1d30-4c73-a326-cbf7afa607fb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1227 19:56:03.264212   15770 system_pods.go:61] "kube-proxy-zsqwg" [61940508-4941-4d96-9a74-a763619ed450] Running
	I1227 19:56:03.264217   15770 system_pods.go:61] "kube-scheduler-addons-416077" [ec8a290a-6436-485a-b23c-670b59446990] Running
	I1227 19:56:03.264224   15770 system_pods.go:61] "metrics-server-5778bb4788-m9rtg" [3d8a12cc-9912-493e-b158-f6a7a1c5a8bb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1227 19:56:03.264232   15770 system_pods.go:61] "nvidia-device-plugin-daemonset-vqxk8" [d519c5f2-43a2-436d-804a-b2d032e7076e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1227 19:56:03.264246   15770 system_pods.go:61] "registry-788cd7d5bc-98b8s" [4ab3c6c7-add5-435f-98d6-6f17591d3018] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1227 19:56:03.264261   15770 system_pods.go:61] "registry-creds-567fb78d95-dwfd6" [096babf5-8e2c-4bab-9054-a587a6f0d942] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1227 19:56:03.264271   15770 system_pods.go:61] "registry-proxy-k2x7x" [c8b8d81c-61d1-4f10-b00a-9e224d8314a9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1227 19:56:03.264284   15770 system_pods.go:61] "snapshot-controller-6588d87457-49mhg" [1a00b668-e32b-48df-9529-e32f8db28a95] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 19:56:03.264296   15770 system_pods.go:61] "snapshot-controller-6588d87457-6drpv" [fbba2e27-5c74-4322-b57c-92bc72cbdfb8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 19:56:03.264307   15770 system_pods.go:61] "storage-provisioner" [eb95be12-63e4-4280-b3aa-51a9b805286c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 19:56:03.264316   15770 system_pods.go:74] duration metric: took 4.376827ms to wait for pod list to return data ...
	I1227 19:56:03.264327   15770 default_sa.go:34] waiting for default service account to be created ...
	I1227 19:56:03.267297   15770 default_sa.go:45] found service account: "default"
	I1227 19:56:03.267319   15770 default_sa.go:55] duration metric: took 2.982026ms for default service account to be created ...
	I1227 19:56:03.267329   15770 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 19:56:03.363255   15770 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1227 19:56:03.363277   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:03.363393   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:03.363565   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:03.364428   15770 system_pods.go:86] 20 kube-system pods found
	I1227 19:56:03.364451   15770 system_pods.go:89] "amd-gpu-device-plugin-qn65m" [7b2a13a1-3c72-4975-9585-3d01f151c548] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1227 19:56:03.364458   15770 system_pods.go:89] "coredns-7d764666f9-7l6tj" [d53972fe-9d23-40fd-9b37-36561cc9bf04] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 19:56:03.364464   15770 system_pods.go:89] "csi-hostpath-attacher-0" [e4a001d0-022e-44af-8dc1-88fdcf077a41] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1227 19:56:03.364470   15770 system_pods.go:89] "csi-hostpath-resizer-0" [b5d1cb68-001f-4c12-8121-1609511d544f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1227 19:56:03.364476   15770 system_pods.go:89] "csi-hostpathplugin-fq7b4" [fdf4583b-5df9-4b45-ad97-b9d7086735ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1227 19:56:03.364481   15770 system_pods.go:89] "etcd-addons-416077" [b7a9f1bf-f2ae-4e42-a09d-c9bc6fc1d519] Running
	I1227 19:56:03.364486   15770 system_pods.go:89] "kindnet-g8dlg" [1ba6e2c5-3475-4033-9fe5-4a09af712b1b] Running
	I1227 19:56:03.364490   15770 system_pods.go:89] "kube-apiserver-addons-416077" [777f989a-516c-428f-a876-7d2942b2037b] Running
	I1227 19:56:03.364494   15770 system_pods.go:89] "kube-controller-manager-addons-416077" [d8095884-eca7-4988-9142-6fec044c25ce] Running
	I1227 19:56:03.364501   15770 system_pods.go:89] "kube-ingress-dns-minikube" [6ef27a4a-1d30-4c73-a326-cbf7afa607fb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1227 19:56:03.364505   15770 system_pods.go:89] "kube-proxy-zsqwg" [61940508-4941-4d96-9a74-a763619ed450] Running
	I1227 19:56:03.364508   15770 system_pods.go:89] "kube-scheduler-addons-416077" [ec8a290a-6436-485a-b23c-670b59446990] Running
	I1227 19:56:03.364512   15770 system_pods.go:89] "metrics-server-5778bb4788-m9rtg" [3d8a12cc-9912-493e-b158-f6a7a1c5a8bb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1227 19:56:03.364518   15770 system_pods.go:89] "nvidia-device-plugin-daemonset-vqxk8" [d519c5f2-43a2-436d-804a-b2d032e7076e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1227 19:56:03.364523   15770 system_pods.go:89] "registry-788cd7d5bc-98b8s" [4ab3c6c7-add5-435f-98d6-6f17591d3018] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1227 19:56:03.364527   15770 system_pods.go:89] "registry-creds-567fb78d95-dwfd6" [096babf5-8e2c-4bab-9054-a587a6f0d942] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1227 19:56:03.364535   15770 system_pods.go:89] "registry-proxy-k2x7x" [c8b8d81c-61d1-4f10-b00a-9e224d8314a9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1227 19:56:03.364539   15770 system_pods.go:89] "snapshot-controller-6588d87457-49mhg" [1a00b668-e32b-48df-9529-e32f8db28a95] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 19:56:03.364545   15770 system_pods.go:89] "snapshot-controller-6588d87457-6drpv" [fbba2e27-5c74-4322-b57c-92bc72cbdfb8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 19:56:03.364550   15770 system_pods.go:89] "storage-provisioner" [eb95be12-63e4-4280-b3aa-51a9b805286c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 19:56:03.364570   15770 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1227 19:56:03.496602   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:03.618255   15770 system_pods.go:86] 20 kube-system pods found
	I1227 19:56:03.618283   15770 system_pods.go:89] "amd-gpu-device-plugin-qn65m" [7b2a13a1-3c72-4975-9585-3d01f151c548] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1227 19:56:03.618291   15770 system_pods.go:89] "coredns-7d764666f9-7l6tj" [d53972fe-9d23-40fd-9b37-36561cc9bf04] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 19:56:03.618301   15770 system_pods.go:89] "csi-hostpath-attacher-0" [e4a001d0-022e-44af-8dc1-88fdcf077a41] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1227 19:56:03.618309   15770 system_pods.go:89] "csi-hostpath-resizer-0" [b5d1cb68-001f-4c12-8121-1609511d544f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1227 19:56:03.618318   15770 system_pods.go:89] "csi-hostpathplugin-fq7b4" [fdf4583b-5df9-4b45-ad97-b9d7086735ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1227 19:56:03.618327   15770 system_pods.go:89] "etcd-addons-416077" [b7a9f1bf-f2ae-4e42-a09d-c9bc6fc1d519] Running
	I1227 19:56:03.618337   15770 system_pods.go:89] "kindnet-g8dlg" [1ba6e2c5-3475-4033-9fe5-4a09af712b1b] Running
	I1227 19:56:03.618345   15770 system_pods.go:89] "kube-apiserver-addons-416077" [777f989a-516c-428f-a876-7d2942b2037b] Running
	I1227 19:56:03.618349   15770 system_pods.go:89] "kube-controller-manager-addons-416077" [d8095884-eca7-4988-9142-6fec044c25ce] Running
	I1227 19:56:03.618356   15770 system_pods.go:89] "kube-ingress-dns-minikube" [6ef27a4a-1d30-4c73-a326-cbf7afa607fb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1227 19:56:03.618361   15770 system_pods.go:89] "kube-proxy-zsqwg" [61940508-4941-4d96-9a74-a763619ed450] Running
	I1227 19:56:03.618368   15770 system_pods.go:89] "kube-scheduler-addons-416077" [ec8a290a-6436-485a-b23c-670b59446990] Running
	I1227 19:56:03.618374   15770 system_pods.go:89] "metrics-server-5778bb4788-m9rtg" [3d8a12cc-9912-493e-b158-f6a7a1c5a8bb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1227 19:56:03.618388   15770 system_pods.go:89] "nvidia-device-plugin-daemonset-vqxk8" [d519c5f2-43a2-436d-804a-b2d032e7076e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1227 19:56:03.618397   15770 system_pods.go:89] "registry-788cd7d5bc-98b8s" [4ab3c6c7-add5-435f-98d6-6f17591d3018] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1227 19:56:03.618404   15770 system_pods.go:89] "registry-creds-567fb78d95-dwfd6" [096babf5-8e2c-4bab-9054-a587a6f0d942] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1227 19:56:03.618415   15770 system_pods.go:89] "registry-proxy-k2x7x" [c8b8d81c-61d1-4f10-b00a-9e224d8314a9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1227 19:56:03.618426   15770 system_pods.go:89] "snapshot-controller-6588d87457-49mhg" [1a00b668-e32b-48df-9529-e32f8db28a95] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 19:56:03.618435   15770 system_pods.go:89] "snapshot-controller-6588d87457-6drpv" [fbba2e27-5c74-4322-b57c-92bc72cbdfb8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 19:56:03.618446   15770 system_pods.go:89] "storage-provisioner" [eb95be12-63e4-4280-b3aa-51a9b805286c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 19:56:03.818486   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:03.822276   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:03.823819   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:03.932863   15770 system_pods.go:86] 20 kube-system pods found
	I1227 19:56:03.932903   15770 system_pods.go:89] "amd-gpu-device-plugin-qn65m" [7b2a13a1-3c72-4975-9585-3d01f151c548] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1227 19:56:03.932932   15770 system_pods.go:89] "coredns-7d764666f9-7l6tj" [d53972fe-9d23-40fd-9b37-36561cc9bf04] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 19:56:03.932942   15770 system_pods.go:89] "csi-hostpath-attacher-0" [e4a001d0-022e-44af-8dc1-88fdcf077a41] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1227 19:56:03.932951   15770 system_pods.go:89] "csi-hostpath-resizer-0" [b5d1cb68-001f-4c12-8121-1609511d544f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1227 19:56:03.932962   15770 system_pods.go:89] "csi-hostpathplugin-fq7b4" [fdf4583b-5df9-4b45-ad97-b9d7086735ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1227 19:56:03.932969   15770 system_pods.go:89] "etcd-addons-416077" [b7a9f1bf-f2ae-4e42-a09d-c9bc6fc1d519] Running
	I1227 19:56:03.932977   15770 system_pods.go:89] "kindnet-g8dlg" [1ba6e2c5-3475-4033-9fe5-4a09af712b1b] Running
	I1227 19:56:03.932989   15770 system_pods.go:89] "kube-apiserver-addons-416077" [777f989a-516c-428f-a876-7d2942b2037b] Running
	I1227 19:56:03.932997   15770 system_pods.go:89] "kube-controller-manager-addons-416077" [d8095884-eca7-4988-9142-6fec044c25ce] Running
	I1227 19:56:03.933007   15770 system_pods.go:89] "kube-ingress-dns-minikube" [6ef27a4a-1d30-4c73-a326-cbf7afa607fb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1227 19:56:03.933014   15770 system_pods.go:89] "kube-proxy-zsqwg" [61940508-4941-4d96-9a74-a763619ed450] Running
	I1227 19:56:03.933022   15770 system_pods.go:89] "kube-scheduler-addons-416077" [ec8a290a-6436-485a-b23c-670b59446990] Running
	I1227 19:56:03.933031   15770 system_pods.go:89] "metrics-server-5778bb4788-m9rtg" [3d8a12cc-9912-493e-b158-f6a7a1c5a8bb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1227 19:56:03.933042   15770 system_pods.go:89] "nvidia-device-plugin-daemonset-vqxk8" [d519c5f2-43a2-436d-804a-b2d032e7076e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1227 19:56:03.933053   15770 system_pods.go:89] "registry-788cd7d5bc-98b8s" [4ab3c6c7-add5-435f-98d6-6f17591d3018] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1227 19:56:03.933063   15770 system_pods.go:89] "registry-creds-567fb78d95-dwfd6" [096babf5-8e2c-4bab-9054-a587a6f0d942] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1227 19:56:03.933075   15770 system_pods.go:89] "registry-proxy-k2x7x" [c8b8d81c-61d1-4f10-b00a-9e224d8314a9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1227 19:56:03.933087   15770 system_pods.go:89] "snapshot-controller-6588d87457-49mhg" [1a00b668-e32b-48df-9529-e32f8db28a95] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 19:56:03.933099   15770 system_pods.go:89] "snapshot-controller-6588d87457-6drpv" [fbba2e27-5c74-4322-b57c-92bc72cbdfb8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 19:56:03.933126   15770 system_pods.go:89] "storage-provisioner" [eb95be12-63e4-4280-b3aa-51a9b805286c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 19:56:03.997052   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:04.287497   15770 system_pods.go:86] 20 kube-system pods found
	I1227 19:56:04.287537   15770 system_pods.go:89] "amd-gpu-device-plugin-qn65m" [7b2a13a1-3c72-4975-9585-3d01f151c548] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1227 19:56:04.287544   15770 system_pods.go:89] "coredns-7d764666f9-7l6tj" [d53972fe-9d23-40fd-9b37-36561cc9bf04] Running
	I1227 19:56:04.287554   15770 system_pods.go:89] "csi-hostpath-attacher-0" [e4a001d0-022e-44af-8dc1-88fdcf077a41] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1227 19:56:04.287563   15770 system_pods.go:89] "csi-hostpath-resizer-0" [b5d1cb68-001f-4c12-8121-1609511d544f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1227 19:56:04.287571   15770 system_pods.go:89] "csi-hostpathplugin-fq7b4" [fdf4583b-5df9-4b45-ad97-b9d7086735ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1227 19:56:04.287577   15770 system_pods.go:89] "etcd-addons-416077" [b7a9f1bf-f2ae-4e42-a09d-c9bc6fc1d519] Running
	I1227 19:56:04.287582   15770 system_pods.go:89] "kindnet-g8dlg" [1ba6e2c5-3475-4033-9fe5-4a09af712b1b] Running
	I1227 19:56:04.287587   15770 system_pods.go:89] "kube-apiserver-addons-416077" [777f989a-516c-428f-a876-7d2942b2037b] Running
	I1227 19:56:04.287592   15770 system_pods.go:89] "kube-controller-manager-addons-416077" [d8095884-eca7-4988-9142-6fec044c25ce] Running
	I1227 19:56:04.287600   15770 system_pods.go:89] "kube-ingress-dns-minikube" [6ef27a4a-1d30-4c73-a326-cbf7afa607fb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1227 19:56:04.287604   15770 system_pods.go:89] "kube-proxy-zsqwg" [61940508-4941-4d96-9a74-a763619ed450] Running
	I1227 19:56:04.287610   15770 system_pods.go:89] "kube-scheduler-addons-416077" [ec8a290a-6436-485a-b23c-670b59446990] Running
	I1227 19:56:04.287618   15770 system_pods.go:89] "metrics-server-5778bb4788-m9rtg" [3d8a12cc-9912-493e-b158-f6a7a1c5a8bb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1227 19:56:04.287627   15770 system_pods.go:89] "nvidia-device-plugin-daemonset-vqxk8" [d519c5f2-43a2-436d-804a-b2d032e7076e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1227 19:56:04.287635   15770 system_pods.go:89] "registry-788cd7d5bc-98b8s" [4ab3c6c7-add5-435f-98d6-6f17591d3018] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1227 19:56:04.287643   15770 system_pods.go:89] "registry-creds-567fb78d95-dwfd6" [096babf5-8e2c-4bab-9054-a587a6f0d942] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1227 19:56:04.287652   15770 system_pods.go:89] "registry-proxy-k2x7x" [c8b8d81c-61d1-4f10-b00a-9e224d8314a9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1227 19:56:04.287660   15770 system_pods.go:89] "snapshot-controller-6588d87457-49mhg" [1a00b668-e32b-48df-9529-e32f8db28a95] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 19:56:04.287682   15770 system_pods.go:89] "snapshot-controller-6588d87457-6drpv" [fbba2e27-5c74-4322-b57c-92bc72cbdfb8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 19:56:04.287689   15770 system_pods.go:89] "storage-provisioner" [eb95be12-63e4-4280-b3aa-51a9b805286c] Running
	I1227 19:56:04.287698   15770 system_pods.go:126] duration metric: took 1.020363466s to wait for k8s-apps to be running ...
	I1227 19:56:04.287707   15770 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 19:56:04.287756   15770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 19:56:04.306651   15770 system_svc.go:56] duration metric: took 18.935691ms WaitForService to wait for kubelet
	I1227 19:56:04.306678   15770 kubeadm.go:587] duration metric: took 14.645860004s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 19:56:04.306706   15770 node_conditions.go:102] verifying NodePressure condition ...
	I1227 19:56:04.309423   15770 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 19:56:04.309451   15770 node_conditions.go:123] node cpu capacity is 8
	I1227 19:56:04.309467   15770 node_conditions.go:105] duration metric: took 2.755172ms to run NodePressure ...
	I1227 19:56:04.309480   15770 start.go:242] waiting for startup goroutines ...
	I1227 19:56:04.386854   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:04.387656   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:04.388177   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:04.498666   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:04.819623   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:04.822697   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:04.824505   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:04.996393   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:05.319872   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:05.323114   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:05.323828   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:05.496932   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:05.818880   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:05.823342   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:05.823956   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:05.996843   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:06.318862   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:06.323288   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:06.323930   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:06.496979   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:06.819856   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:06.823115   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:06.823864   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:06.996678   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:07.318820   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:07.323292   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:07.324216   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:07.497258   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:07.847239   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:07.847296   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:07.847311   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:07.996698   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:08.318562   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:08.323181   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:08.324651   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:08.496534   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:08.820764   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:08.826380   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:08.826803   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:08.997165   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:09.319211   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:09.323058   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:09.324013   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:09.496825   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:09.819176   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:09.823676   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:09.824200   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:09.997106   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:10.319033   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:10.323002   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:10.323771   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:10.496739   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:10.819216   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:10.823463   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:10.824211   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:10.996888   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:11.319187   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:11.419996   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:11.419996   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:11.496209   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:11.818902   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:11.822812   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:11.823761   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:11.996331   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:12.318632   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:12.322928   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:12.323619   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:12.496716   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:12.818988   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:12.823166   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:12.823970   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:12.996733   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:13.319797   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:13.323128   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:13.323509   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:13.496560   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:13.818950   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:13.823121   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:13.824013   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:13.997034   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:14.318632   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:14.322543   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:14.323443   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:14.496630   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:14.818738   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:14.822757   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:14.823669   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:14.996448   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:15.319441   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:15.323664   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:15.324279   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:15.497396   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:15.818593   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:15.919280   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:15.919319   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:16.019968   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:16.319169   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:16.323404   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:16.324214   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:16.497139   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:16.818825   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:16.822782   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:16.823769   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:16.996303   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:17.319171   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:17.322975   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:17.323817   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:17.496852   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:17.818597   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:17.822489   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:17.823408   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:17.996035   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:18.319515   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:18.322365   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:18.324042   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:18.497270   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:18.819366   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:18.823593   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:18.824611   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:18.996400   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:19.318715   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:19.322908   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:19.323763   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:19.496886   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:19.818906   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:19.822697   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:19.823703   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:19.996170   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:20.320041   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:20.323382   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:20.324135   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:20.496194   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:20.819069   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:20.823068   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:20.824011   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:21.013849   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:21.320589   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:21.322635   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:21.324336   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:21.496754   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:21.819038   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:21.823190   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:21.824110   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:21.997151   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:22.319098   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:22.323366   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:22.324249   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:22.497213   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:22.819340   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:22.823630   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:22.824419   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:22.996772   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:23.319217   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:23.323508   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:23.324226   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:23.497600   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:23.819301   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:23.823629   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:23.824150   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:23.996928   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:24.318821   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:24.322810   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:24.323645   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:24.496616   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:24.818865   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:24.823072   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:24.824165   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:24.996968   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:25.319350   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:25.323325   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:25.324307   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:25.552970   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:25.818838   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:25.822706   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:25.823744   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:25.996446   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:26.319202   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:26.324268   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:26.324360   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:26.496309   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:26.819672   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:26.822543   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:26.824227   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:26.996760   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:27.319063   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:27.322498   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:27.323600   15770 kapi.go:107] duration metric: took 36.002074698s to wait for kubernetes.io/minikube-addons=registry ...
	I1227 19:56:27.497106   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:27.818984   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:27.823479   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:27.996390   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:28.319746   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:28.322576   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:28.496417   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:28.818477   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:28.822426   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:28.996022   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:29.319314   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:29.323544   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:29.496588   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:29.818568   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:29.824613   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:29.996042   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:30.319379   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:30.323873   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:30.496305   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:30.818812   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:30.823059   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:30.996473   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:31.321533   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:31.323182   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:31.497176   15770 kapi.go:107] duration metric: took 33.503834884s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1227 19:56:31.498808   15770 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-416077 cluster.
	I1227 19:56:31.500297   15770 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1227 19:56:31.501661   15770 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1227 19:56:31.824095   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:31.826707   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:32.318544   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:32.322579   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:32.818075   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:32.822849   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:33.319740   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:33.322995   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:33.819374   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:33.823350   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:34.319093   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:34.323465   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:34.862842   15770 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:34.863015   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:35.334896   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:35.335609   15770 kapi.go:107] duration metric: took 43.520318972s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1227 19:56:35.824587   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:36.324565   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:36.823859   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:37.323982   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:37.823621   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:38.324748   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:38.823957   15770 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:39.322441   15770 kapi.go:107] duration metric: took 48.002242678s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1227 19:56:39.323908   15770 out.go:179] * Enabled addons: cloud-spanner, registry-creds, nvidia-device-plugin, inspektor-gadget, ingress-dns, amd-gpu-device-plugin, storage-provisioner, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1227 19:56:39.325389   15770 addons.go:530] duration metric: took 49.664539579s for enable addons: enabled=[cloud-spanner registry-creds nvidia-device-plugin inspektor-gadget ingress-dns amd-gpu-device-plugin storage-provisioner storage-provisioner-rancher metrics-server yakd volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1227 19:56:39.325431   15770 start.go:247] waiting for cluster config update ...
	I1227 19:56:39.325455   15770 start.go:256] writing updated cluster config ...
	I1227 19:56:39.325754   15770 ssh_runner.go:195] Run: rm -f paused
	I1227 19:56:39.329579   15770 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 19:56:39.331827   15770 pod_ready.go:83] waiting for pod "coredns-7d764666f9-7l6tj" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 19:56:39.335024   15770 pod_ready.go:94] pod "coredns-7d764666f9-7l6tj" is "Ready"
	I1227 19:56:39.335044   15770 pod_ready.go:86] duration metric: took 3.193663ms for pod "coredns-7d764666f9-7l6tj" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 19:56:39.336547   15770 pod_ready.go:83] waiting for pod "etcd-addons-416077" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 19:56:39.339358   15770 pod_ready.go:94] pod "etcd-addons-416077" is "Ready"
	I1227 19:56:39.339380   15770 pod_ready.go:86] duration metric: took 2.813559ms for pod "etcd-addons-416077" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 19:56:39.340931   15770 pod_ready.go:83] waiting for pod "kube-apiserver-addons-416077" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 19:56:39.343821   15770 pod_ready.go:94] pod "kube-apiserver-addons-416077" is "Ready"
	I1227 19:56:39.343835   15770 pod_ready.go:86] duration metric: took 2.886103ms for pod "kube-apiserver-addons-416077" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 19:56:39.345300   15770 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-416077" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 19:56:39.733160   15770 pod_ready.go:94] pod "kube-controller-manager-addons-416077" is "Ready"
	I1227 19:56:39.733186   15770 pod_ready.go:86] duration metric: took 387.869243ms for pod "kube-controller-manager-addons-416077" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 19:56:39.933107   15770 pod_ready.go:83] waiting for pod "kube-proxy-zsqwg" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 19:56:40.332992   15770 pod_ready.go:94] pod "kube-proxy-zsqwg" is "Ready"
	I1227 19:56:40.333015   15770 pod_ready.go:86] duration metric: took 399.885446ms for pod "kube-proxy-zsqwg" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 19:56:40.532229   15770 pod_ready.go:83] waiting for pod "kube-scheduler-addons-416077" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 19:56:40.933294   15770 pod_ready.go:94] pod "kube-scheduler-addons-416077" is "Ready"
	I1227 19:56:40.933320   15770 pod_ready.go:86] duration metric: took 401.060758ms for pod "kube-scheduler-addons-416077" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 19:56:40.933334   15770 pod_ready.go:40] duration metric: took 1.603730888s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 19:56:40.974390   15770 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 19:56:40.976805   15770 out.go:179] * Done! kubectl is now configured to use "addons-416077" cluster and "default" namespace by default
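The kapi.go and pod_ready.go entries above are label-selector polling loops: minikube repeatedly lists the pods matching each selector and re-checks their Ready condition until every addon pod comes up or the per-addon timeout expires. A minimal client-go sketch of that style of wait — assuming a default kubeconfig, an arbitrary 2-second poll interval, and the registry selector quoted in the log; this is illustrative, not minikube's actual implementation — could look like:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForReadyPods polls until every pod matching selector in namespace
// reports the Ready condition, or until ctx expires.
func waitForReadyPods(ctx context.Context, cs kubernetes.Interface, namespace, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		allReady := len(pods.Items) > 0
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				allReady = false
				fmt.Printf("waiting for pod %q, current phase: %s\n", p.Name, p.Status.Phase)
			}
		}
		if allReady {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(2 * time.Second): // poll interval is an arbitrary placeholder
		}
	}
}

func main() {
	// Kubeconfig path is an assumption for this sketch (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForReadyPods(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
		panic(err)
	}
}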
	
	
	==> CRI-O <==
	Dec 27 19:56:38 addons-416077 crio[771]: time="2025-12-27T19:56:38.734216382Z" level=info msg="Starting container: aa2a50393d9f2f98a9827c8b334390563c94960bf1bc3591297a590d2b69bcd9" id=dfc14829-be31-46c9-a797-11a942a81001 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 19:56:38 addons-416077 crio[771]: time="2025-12-27T19:56:38.736150805Z" level=info msg="Started container" PID=5941 containerID=aa2a50393d9f2f98a9827c8b334390563c94960bf1bc3591297a590d2b69bcd9 description=ingress-nginx/ingress-nginx-controller-7847b5c79c-8d7p7/controller id=dfc14829-be31-46c9-a797-11a942a81001 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e0d851f5d1ab7ab423a05ed1c9da952539b30e3eb862e250c27a131472e7aa09
	Dec 27 19:56:41 addons-416077 crio[771]: time="2025-12-27T19:56:41.768077263Z" level=info msg="Running pod sandbox: default/busybox/POD" id=8dfe5098-9f8a-4159-9ec6-76203239a5a8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 19:56:41 addons-416077 crio[771]: time="2025-12-27T19:56:41.768152971Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 19:56:41 addons-416077 crio[771]: time="2025-12-27T19:56:41.77483603Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c399c4ea3eb1a599acecd20e992f34d33ef4861040c3e217b49049855a3c7b0e UID:98ee8156-0eab-46e8-83c8-92bb16e99805 NetNS:/var/run/netns/c62e000c-2c1e-49d8-aba9-df9c951e501d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008ab70}] Aliases:map[]}"
	Dec 27 19:56:41 addons-416077 crio[771]: time="2025-12-27T19:56:41.774859827Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 27 19:56:41 addons-416077 crio[771]: time="2025-12-27T19:56:41.783895413Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c399c4ea3eb1a599acecd20e992f34d33ef4861040c3e217b49049855a3c7b0e UID:98ee8156-0eab-46e8-83c8-92bb16e99805 NetNS:/var/run/netns/c62e000c-2c1e-49d8-aba9-df9c951e501d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008ab70}] Aliases:map[]}"
	Dec 27 19:56:41 addons-416077 crio[771]: time="2025-12-27T19:56:41.784045695Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 27 19:56:41 addons-416077 crio[771]: time="2025-12-27T19:56:41.784761956Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 27 19:56:41 addons-416077 crio[771]: time="2025-12-27T19:56:41.785496895Z" level=info msg="Ran pod sandbox c399c4ea3eb1a599acecd20e992f34d33ef4861040c3e217b49049855a3c7b0e with infra container: default/busybox/POD" id=8dfe5098-9f8a-4159-9ec6-76203239a5a8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 19:56:41 addons-416077 crio[771]: time="2025-12-27T19:56:41.786597882Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ed61d40b-c996-4f65-95b6-130c645e4c91 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 19:56:41 addons-416077 crio[771]: time="2025-12-27T19:56:41.786731629Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ed61d40b-c996-4f65-95b6-130c645e4c91 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 19:56:41 addons-416077 crio[771]: time="2025-12-27T19:56:41.786765462Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=ed61d40b-c996-4f65-95b6-130c645e4c91 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 19:56:41 addons-416077 crio[771]: time="2025-12-27T19:56:41.787462079Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8ffa22da-7fe5-48ce-9cc2-38c9e09960e3 name=/runtime.v1.ImageService/PullImage
	Dec 27 19:56:41 addons-416077 crio[771]: time="2025-12-27T19:56:41.788772838Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 27 19:56:42 addons-416077 crio[771]: time="2025-12-27T19:56:42.325321171Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=8ffa22da-7fe5-48ce-9cc2-38c9e09960e3 name=/runtime.v1.ImageService/PullImage
	Dec 27 19:56:42 addons-416077 crio[771]: time="2025-12-27T19:56:42.325760213Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f8ded4a1-64bc-4a77-ad63-688456356f35 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 19:56:42 addons-416077 crio[771]: time="2025-12-27T19:56:42.327175455Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cc93d143-ea54-4ba7-bbb8-03b65b2852e3 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 19:56:42 addons-416077 crio[771]: time="2025-12-27T19:56:42.330320362Z" level=info msg="Creating container: default/busybox/busybox" id=d1cd8d7b-17f3-4071-ac3d-2f25a2d693da name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 19:56:42 addons-416077 crio[771]: time="2025-12-27T19:56:42.33042583Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 19:56:42 addons-416077 crio[771]: time="2025-12-27T19:56:42.336464795Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 19:56:42 addons-416077 crio[771]: time="2025-12-27T19:56:42.337032888Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 19:56:42 addons-416077 crio[771]: time="2025-12-27T19:56:42.365715122Z" level=info msg="Created container df182b9f99a6531f30fe76e54082806fe07120ba2d957907a410a09716f31818: default/busybox/busybox" id=d1cd8d7b-17f3-4071-ac3d-2f25a2d693da name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 19:56:42 addons-416077 crio[771]: time="2025-12-27T19:56:42.366212804Z" level=info msg="Starting container: df182b9f99a6531f30fe76e54082806fe07120ba2d957907a410a09716f31818" id=9b9fa20e-9ddf-429e-a00a-ff9c4bbfb6eb name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 19:56:42 addons-416077 crio[771]: time="2025-12-27T19:56:42.367837369Z" level=info msg="Started container" PID=6314 containerID=df182b9f99a6531f30fe76e54082806fe07120ba2d957907a410a09716f31818 description=default/busybox/busybox id=9b9fa20e-9ddf-429e-a00a-ff9c4bbfb6eb name=/runtime.v1.RuntimeService/StartContainer sandboxID=c399c4ea3eb1a599acecd20e992f34d33ef4861040c3e217b49049855a3c7b0e
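The CRI-O entries above trace the standard CRI sequence for the default/busybox pod: RunPodSandbox with CNI setup on the kindnet network, an ImageService/ImageStatus check that finds gcr.io/k8s-minikube/busybox:1.28.4-glibc missing, ImageService/PullImage, then CreateContainer and StartContainer. A minimal sketch of the ImageStatus call against CRI-O's gRPC socket — the socket path is an assumed default and the image reference is simply the one named in the log; this is illustrative only — could look like:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Assumed default CRI-O socket location.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock())
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	images := runtimeapi.NewImageServiceClient(conn)
	ref := "gcr.io/k8s-minikube/busybox:1.28.4-glibc" // image named in the log above

	// Counterpart of the ImageService/ImageStatus call logged by CRI-O.
	status, err := images.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{
		Image: &runtimeapi.ImageSpec{Image: ref},
	})
	if err != nil {
		panic(err)
	}
	if status.Image == nil {
		fmt.Printf("image %s not present; a kubelet would issue ImageService/PullImage next\n", ref)
		return
	}
	fmt.Printf("image %s present with id %s\n", ref, status.Image.Id)
}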
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	df182b9f99a65       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          7 seconds ago        Running             busybox                                  0                   c399c4ea3eb1a       busybox                                     default
	aa2a50393d9f2       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             11 seconds ago       Running             controller                               0                   e0d851f5d1ab7       ingress-nginx-controller-7847b5c79c-8d7p7   ingress-nginx
	b8a7bd9235b6a       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          15 seconds ago       Running             csi-snapshotter                          0                   b91acb95804bc       csi-hostpathplugin-fq7b4                    kube-system
	209dc9e84dfba       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          16 seconds ago       Running             csi-provisioner                          0                   b91acb95804bc       csi-hostpathplugin-fq7b4                    kube-system
	fd00ed25c985f       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            17 seconds ago       Running             liveness-probe                           0                   b91acb95804bc       csi-hostpathplugin-fq7b4                    kube-system
	e3bf55084b947       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           17 seconds ago       Running             hostpath                                 0                   b91acb95804bc       csi-hostpathplugin-fq7b4                    kube-system
	6869a50a73d8d       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                18 seconds ago       Running             node-driver-registrar                    0                   b91acb95804bc       csi-hostpathplugin-fq7b4                    kube-system
	c1c6e9e398177       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 19 seconds ago       Running             gcp-auth                                 0                   6f209d52e2eef       gcp-auth-5bbcf684b5-ml925                   gcp-auth
	d8d9bfe50178f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   20 seconds ago       Exited              patch                                    1                   a211e08d6a5db       ingress-nginx-admission-patch-cx4px         ingress-nginx
	fc83614bb6eb0       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            20 seconds ago       Running             gadget                                   0                   386fe8260ef52       gadget-xz7sz                                gadget
	b1ec3b5e74d5f       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              23 seconds ago       Running             registry-proxy                           0                   bbab72a7de5b0       registry-proxy-k2x7x                        kube-system
	d88dbb28fc23c       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     24 seconds ago       Running             amd-gpu-device-plugin                    0                   58fab4b70a82a       amd-gpu-device-plugin-qn65m                 kube-system
	0c66faa44bc4b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   25 seconds ago       Exited              create                                   0                   3d1d490f9592f       ingress-nginx-admission-create-nxjx8        ingress-nginx
	e30f65c78d03f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   26 seconds ago       Exited              patch                                    0                   bc4f391477f6b       gcp-auth-certs-patch-xkdpw                  gcp-auth
	40622d51884d5       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   26 seconds ago       Running             csi-external-health-monitor-controller   0                   b91acb95804bc       csi-hostpathplugin-fq7b4                    kube-system
	0a91b3f02715c       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              27 seconds ago       Running             csi-resizer                              0                   aa716d1fb0058       csi-hostpath-resizer-0                      kube-system
	1a50bd427e4c7       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      27 seconds ago       Running             volume-snapshot-controller               0                   3e0ac3e6ca490       snapshot-controller-6588d87457-6drpv        kube-system
	434ff2420d66b       ghcr.io/manusa/yakd@sha256:ef51bed688eb0feab1405f97b7286dfe1da3c61e5a189ce4ae34a90c9f9cf8aa                                                  28 seconds ago       Running             yakd                                     0                   f4cc18f18f3c5       yakd-dashboard-865bfb49b9-c5n6k             yakd-dashboard
	d3a037a6b817d       nvcr.io/nvidia/k8s-device-plugin@sha256:c3c1a099015d1810c249ba294beaad656ce0354f7e8a77803dacabe60a4f8c9f                                     30 seconds ago       Running             nvidia-device-plugin-ctr                 0                   2c8fa38a88fc5       nvidia-device-plugin-daemonset-vqxk8        kube-system
	23bfea2446ed1       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             32 seconds ago       Running             csi-attacher                             0                   c3810beedd875       csi-hostpath-attacher-0                     kube-system
	b51a8722302db       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   34 seconds ago       Exited              create                                   0                   01cc2fcc3c946       gcp-auth-certs-create-tfbtt                 gcp-auth
	91336251bde31       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      34 seconds ago       Running             volume-snapshot-controller               0                   d66ed4a2888db       snapshot-controller-6588d87457-49mhg        kube-system
	ca648bdc045e5       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        35 seconds ago       Running             metrics-server                           0                   b685490a7d4d5       metrics-server-5778bb4788-m9rtg             kube-system
	5c6285e9513d1       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           37 seconds ago       Running             registry                                 0                   6b4a72972b2df       registry-788cd7d5bc-98b8s                   kube-system
	f73ed88e903dd       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             38 seconds ago       Running             local-path-provisioner                   0                   d7b3c7a8c2c91       local-path-provisioner-c44bcd496-jvvx5      local-path-storage
	d59213dbca5d6       gcr.io/cloud-spanner-emulator/emulator@sha256:b948b04b45496ebeb13eee27bc9d238593c142e8e010443892153f181591abde                               39 seconds ago       Running             cloud-spanner-emulator                   0                   42963d599836a       cloud-spanner-emulator-5649ccbc87-kgmdm     default
	19ba832dc0107       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               42 seconds ago       Running             minikube-ingress-dns                     0                   6ac0122cd4e49       kube-ingress-dns-minikube                   kube-system
	bb1c2025dc24f       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                                             46 seconds ago       Running             coredns                                  0                   4b408e8512a98       coredns-7d764666f9-7l6tj                    kube-system
	3321a7f88f9d1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             46 seconds ago       Running             storage-provisioner                      0                   3deaa2c0fd080       storage-provisioner                         kube-system
	1502dada11165       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27                                           58 seconds ago       Running             kindnet-cni                              0                   568bf18964d4a       kindnet-g8dlg                               kube-system
	49d7f2a724dd3       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                                                             About a minute ago   Running             kube-proxy                               0                   5ffe30d7e8976       kube-proxy-zsqwg                            kube-system
	974b47f0c91a7       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                                                             About a minute ago   Running             kube-controller-manager                  0                   b7f76e3b7fded       kube-controller-manager-addons-416077       kube-system
	0260af8be5e3c       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                                                             About a minute ago   Running             kube-scheduler                           0                   7da3deadb3fbd       kube-scheduler-addons-416077                kube-system
	894722f2278ed       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                                                             About a minute ago   Running             kube-apiserver                           0                   b423588b38184       kube-apiserver-addons-416077                kube-system
	cf4602f54ae9b       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                                                             About a minute ago   Running             etcd                                     0                   aa57513707f06       etcd-addons-416077                          kube-system
	
	
	==> coredns [bb1c2025dc24f7c70349970cd631d0d6533295514fad7908a0cb32e5a75c7b16] <==
	[INFO] 10.244.0.19:34169 - 52971 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000153837s
	[INFO] 10.244.0.19:40424 - 3010 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00010197s
	[INFO] 10.244.0.19:40424 - 2670 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000133515s
	[INFO] 10.244.0.19:39612 - 55861 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.00005795s
	[INFO] 10.244.0.19:39612 - 55520 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000081157s
	[INFO] 10.244.0.19:54350 - 37400 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000057525s
	[INFO] 10.244.0.19:54350 - 37150 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000117255s
	[INFO] 10.244.0.19:55168 - 43236 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000034389s
	[INFO] 10.244.0.19:55168 - 43006 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000046227s
	[INFO] 10.244.0.19:52261 - 41331 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000096945s
	[INFO] 10.244.0.19:52261 - 41133 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00011073s
	[INFO] 10.244.0.21:49328 - 38089 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000178715s
	[INFO] 10.244.0.21:48423 - 19253 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000251628s
	[INFO] 10.244.0.21:49988 - 3624 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000114273s
	[INFO] 10.244.0.21:43228 - 40846 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000107706s
	[INFO] 10.244.0.21:45954 - 27943 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000115883s
	[INFO] 10.244.0.21:50597 - 1614 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000094473s
	[INFO] 10.244.0.21:47875 - 42534 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.00402885s
	[INFO] 10.244.0.21:50660 - 7720 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004433009s
	[INFO] 10.244.0.21:34314 - 57573 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004278088s
	[INFO] 10.244.0.21:38833 - 3506 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004360334s
	[INFO] 10.244.0.21:41522 - 24470 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003865645s
	[INFO] 10.244.0.21:53230 - 22980 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004400259s
	[INFO] 10.244.0.21:57530 - 28566 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000980561s
	[INFO] 10.244.0.21:40313 - 65473 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00208605s
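	Note: the run of NXDOMAIN answers above is the pod resolver walking the cluster DNS search list (namespace.svc.cluster.local, svc.cluster.local, cluster.local, then the node's GCE-internal suffixes) before the bare name finally resolves with NOERROR; with the default ndots:5 in pod resolv.conf this is expected behaviour, not a lookup failure. A quick way to confirm the search list in effect, assuming the default/busybox pod from this run is still scheduled:
	
	  kubectl --context addons-416077 exec busybox -- cat /etc/resolv.conf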
	
	
	==> describe nodes <==
	Name:               addons-416077
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-416077
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=addons-416077
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T19_55_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-416077
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-416077"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 19:55:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-416077
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 19:56:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 19:56:45 +0000   Sat, 27 Dec 2025 19:55:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 19:56:45 +0000   Sat, 27 Dec 2025 19:55:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 19:56:45 +0000   Sat, 27 Dec 2025 19:55:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 19:56:45 +0000   Sat, 27 Dec 2025 19:56:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-416077
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                681480f6-bb03-4539-97a9-c327d8f343e9
	  Boot ID:                    e862e3ac-f8e7-431b-a536-9252fc29cb3f
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  default                     cloud-spanner-emulator-5649ccbc87-kgmdm      0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  gadget                      gadget-xz7sz                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  gcp-auth                    gcp-auth-5bbcf684b5-ml925                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  ingress-nginx               ingress-nginx-controller-7847b5c79c-8d7p7    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         59s
	  kube-system                 amd-gpu-device-plugin-qn65m                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 coredns-7d764666f9-7l6tj                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     61s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 csi-hostpathplugin-fq7b4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 etcd-addons-416077                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         67s
	  kube-system                 kindnet-g8dlg                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      61s
	  kube-system                 kube-apiserver-addons-416077                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-controller-manager-addons-416077        200m (2%)     0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-zsqwg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-scheduler-addons-416077                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 metrics-server-5778bb4788-m9rtg              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         60s
	  kube-system                 nvidia-device-plugin-daemonset-vqxk8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 registry-788cd7d5bc-98b8s                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 registry-creds-567fb78d95-dwfd6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 registry-proxy-k2x7x                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 snapshot-controller-6588d87457-49mhg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 snapshot-controller-6588d87457-6drpv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  local-path-storage          local-path-provisioner-c44bcd496-jvvx5       0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  yakd-dashboard              yakd-dashboard-865bfb49b9-c5n6k              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  62s   node-controller  Node addons-416077 event: Registered Node addons-416077 in Controller
	
	
	==> dmesg <==
	[Dec27 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001882] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.083012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393120] i8042: Warning: Keylock active
	[  +0.020152] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501363] block sda: the capability attribute has been deprecated.
	[  +0.091803] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026212] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.286875] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [cf4602f54ae9bc8b2fbe9e14c800a848f78488296da5de81535f76b847b75c95] <==
	{"level":"info","ts":"2025-12-27T19:55:40.716117Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-27T19:55:40.716153Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-27T19:55:40.716208Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2025-12-27T19:55:40.716238Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T19:55:40.716255Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2025-12-27T19:55:40.716821Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-12-27T19:55:40.716847Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T19:55:40.716866Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2025-12-27T19:55:40.716875Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-12-27T19:55:40.717376Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T19:55:40.717807Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T19:55:40.717838Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T19:55:40.717803Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-416077 ClientURLs:[https://192.168.49.2:2379]}","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T19:55:40.718082Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T19:55:40.718105Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T19:55:40.718225Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T19:55:40.718372Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T19:55:40.718433Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T19:55:40.718474Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T19:55:40.718569Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T19:55:40.719061Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T19:55:40.719137Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T19:55:40.722081Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T19:55:40.722128Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-12-27T19:56:25.759462Z","caller":"traceutil/trace.go:172","msg":"trace[671826041] transaction","detail":"{read_only:false; response_revision:1077; number_of_response:1; }","duration":"137.141399ms","start":"2025-12-27T19:56:25.622306Z","end":"2025-12-27T19:56:25.759447Z","steps":["trace[671826041] 'process raft request'  (duration: 137.020503ms)"],"step_count":1}
	
	
	==> gcp-auth [c1c6e9e3981774d8444f5609cfa6b95f61ca228373e0a652f82a393fef802a7b] <==
	2025/12/27 19:56:30 GCP Auth Webhook started!
	2025/12/27 19:56:41 Ready to marshal response ...
	2025/12/27 19:56:41 Ready to write response ...
	2025/12/27 19:56:41 Ready to marshal response ...
	2025/12/27 19:56:41 Ready to write response ...
	2025/12/27 19:56:41 Ready to marshal response ...
	2025/12/27 19:56:41 Ready to write response ...
	
	
	==> kernel <==
	 19:56:50 up 39 min,  0 user,  load average: 2.01, 0.84, 0.31
	Linux addons-416077 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1502dada111653661c0e71a273490dd919cf35583ca9b21e2bc09c337661c35e] <==
	I1227 19:55:52.254383       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 19:55:52.254677       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1227 19:55:52.254837       1 main.go:148] setting mtu 1500 for CNI 
	I1227 19:55:52.254859       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 19:55:52.254879       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T19:55:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 19:55:52.454406       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 19:55:52.454446       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 19:55:52.454478       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 19:55:52.454618       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 19:55:52.854669       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 19:55:52.854689       1 metrics.go:72] Registering metrics
	I1227 19:55:52.854731       1 controller.go:711] "Syncing nftables rules"
	I1227 19:56:02.455368       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 19:56:02.455434       1 main.go:301] handling current node
	I1227 19:56:12.454982       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 19:56:12.455037       1 main.go:301] handling current node
	I1227 19:56:22.456255       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 19:56:22.456296       1 main.go:301] handling current node
	I1227 19:56:32.454808       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 19:56:32.454843       1 main.go:301] handling current node
	I1227 19:56:42.455379       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 19:56:42.455425       1 main.go:301] handling current node
	
	
	==> kube-apiserver [894722f2278ed9a7a94031fcca594d94b7f3435e0dfae78ff810a233ffb75601] <==
	W1227 19:56:02.804183       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.196.91:443: connect: connection refused
	E1227 19:56:02.804227       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.196.91:443: connect: connection refused" logger="UnhandledError"
	W1227 19:56:02.805442       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.196.91:443: connect: connection refused
	E1227 19:56:02.805477       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.196.91:443: connect: connection refused" logger="UnhandledError"
	W1227 19:56:02.823198       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.196.91:443: connect: connection refused
	E1227 19:56:02.823338       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.196.91:443: connect: connection refused" logger="UnhandledError"
	W1227 19:56:02.828567       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.196.91:443: connect: connection refused
	E1227 19:56:02.828602       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.196.91:443: connect: connection refused" logger="UnhandledError"
	W1227 19:56:15.778085       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1227 19:56:15.784587       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1227 19:56:15.797982       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1227 19:56:15.805635       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1227 19:56:16.175040       1 handler_proxy.go:99] no RequestInfo found in the context
	E1227 19:56:16.175109       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1227 19:56:16.175303       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.53.146:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.53.146:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.53.146:443: connect: connection refused" logger="UnhandledError"
	E1227 19:56:16.180105       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.53.146:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.53.146:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.53.146:443: connect: connection refused" logger="UnhandledError"
	E1227 19:56:16.182724       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.53.146:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.53.146:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.53.146:443: connect: connection refused" logger="UnhandledError"
	E1227 19:56:16.203909       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.53.146:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.53.146:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.53.146:443: connect: connection refused" logger="UnhandledError"
	I1227 19:56:16.272461       1 handler.go:304] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1227 19:56:48.601597       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37266: use of closed network connection
	E1227 19:56:48.738937       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37290: use of closed network connection
	I1227 19:56:50.376050       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
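	Note: the gcp-auth-mutate.k8s.io webhook errors at 19:56:02 are explicitly "failing open" and line up with the webhook backend not yet serving; per the gcp-auth log above it only reports "GCP Auth Webhook started!" at 19:56:30, after which no further webhook errors appear. One way to verify the backend is reachable afterwards, assuming the addon is still enabled:
	
	  kubectl --context addons-416077 -n gcp-auth get pods,svc -o wide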
	
	
	==> kube-controller-manager [974b47f0c91a71d1e2e051d35f13cc1107f3c144323412c0713a463400700010] <==
	I1227 19:55:48.472617       1 shared_informer.go:377] "Caches are synced"
	I1227 19:55:48.472646       1 shared_informer.go:377] "Caches are synced"
	I1227 19:55:48.472670       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1227 19:55:48.472591       1 shared_informer.go:377] "Caches are synced"
	I1227 19:55:48.472698       1 shared_informer.go:377] "Caches are synced"
	I1227 19:55:48.472707       1 shared_informer.go:377] "Caches are synced"
	I1227 19:55:48.472607       1 shared_informer.go:377] "Caches are synced"
	I1227 19:55:48.472829       1 shared_informer.go:377] "Caches are synced"
	I1227 19:55:48.472831       1 shared_informer.go:377] "Caches are synced"
	I1227 19:55:48.473057       1 shared_informer.go:377] "Caches are synced"
	I1227 19:55:48.473067       1 shared_informer.go:377] "Caches are synced"
	I1227 19:55:48.475368       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 19:55:48.480042       1 range_allocator.go:433] "Set node PodCIDR" node="addons-416077" podCIDRs=["10.244.0.0/24"]
	I1227 19:55:48.484035       1 shared_informer.go:377] "Caches are synced"
	I1227 19:55:48.572301       1 shared_informer.go:377] "Caches are synced"
	I1227 19:55:48.572315       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 19:55:48.572319       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 19:55:48.575510       1 shared_informer.go:377] "Caches are synced"
	E1227 19:55:50.977215       1 replica_set.go:592] "Unhandled Error" err="sync \"kube-system/metrics-server-5778bb4788\" failed with pods \"metrics-server-5778bb4788-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1227 19:56:03.474028       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1227 19:56:18.492015       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1227 19:56:18.492192       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 19:56:18.588757       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 19:56:18.592284       1 shared_informer.go:377] "Caches are synced"
	I1227 19:56:18.690320       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [49d7f2a724dd38975a39bb75f10f2c0994c7851ab44689d5ebc0a9b214f17cf0] <==
	I1227 19:55:50.365993       1 server_linux.go:53] "Using iptables proxy"
	I1227 19:55:50.821069       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 19:55:50.921934       1 shared_informer.go:377] "Caches are synced"
	I1227 19:55:50.921971       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1227 19:55:50.922053       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 19:55:50.999117       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 19:55:50.999278       1 server_linux.go:136] "Using iptables Proxier"
	I1227 19:55:51.043083       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 19:55:51.056048       1 server.go:529] "Version info" version="v1.35.0"
	I1227 19:55:51.057441       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 19:55:51.060745       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 19:55:51.060805       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 19:55:51.060851       1 config.go:200] "Starting service config controller"
	I1227 19:55:51.060876       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 19:55:51.060876       1 config.go:309] "Starting node config controller"
	I1227 19:55:51.060982       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 19:55:51.061024       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 19:55:51.060921       1 config.go:106] "Starting endpoint slice config controller"
	I1227 19:55:51.061610       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 19:55:51.161838       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 19:55:51.161870       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 19:55:51.161881       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [0260af8be5e3cd213383de25d2bae0781b2931284e151ca5e6c8d49dad444e87] <==
	E1227 19:55:41.683447       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 19:55:41.683496       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 19:55:41.683511       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 19:55:41.683508       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 19:55:41.683529       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 19:55:41.683574       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 19:55:41.683601       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 19:55:41.683702       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 19:55:41.683707       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 19:55:41.683849       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 19:55:41.683904       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 19:55:41.684267       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 19:55:41.684353       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 19:55:41.684659       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 19:55:41.684686       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 19:55:42.491433       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 19:55:42.566247       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 19:55:42.586994       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 19:55:42.596572       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 19:55:42.655358       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 19:55:42.681286       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 19:55:42.840620       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 19:55:42.852169       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 19:55:42.870816       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1227 19:55:44.677654       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 19:56:31 addons-416077 kubelet[1271]: E1227 19:56:31.221517    1271 prober_manager.go:197] "Startup probe already exists for container" pod="gadget/gadget-xz7sz" containerName="gadget"
	Dec 27 19:56:31 addons-416077 kubelet[1271]: I1227 19:56:31.244264    1271 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="gcp-auth/gcp-auth-5bbcf684b5-ml925" podStartSLOduration=22.753206293 podStartE2EDuration="34.244246595s" podCreationTimestamp="2025-12-27 19:55:57 +0000 UTC" firstStartedPulling="2025-12-27 19:56:19.393404757 +0000 UTC m=+35.471114066" lastFinishedPulling="2025-12-27 19:56:30.88444506 +0000 UTC m=+46.962154368" observedRunningTime="2025-12-27 19:56:31.24300473 +0000 UTC m=+47.320714048" watchObservedRunningTime="2025-12-27 19:56:31.244246595 +0000 UTC m=+47.321955908"
	Dec 27 19:56:31 addons-416077 kubelet[1271]: I1227 19:56:31.429420    1271 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/0664a227-abf2-46bb-bcb9-4184ac429a77-kube-api-access-h8qst\" (UniqueName: \"kubernetes.io/projected/0664a227-abf2-46bb-bcb9-4184ac429a77-kube-api-access-h8qst\") pod \"0664a227-abf2-46bb-bcb9-4184ac429a77\" (UID: \"0664a227-abf2-46bb-bcb9-4184ac429a77\") "
	Dec 27 19:56:31 addons-416077 kubelet[1271]: I1227 19:56:31.431868    1271 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0664a227-abf2-46bb-bcb9-4184ac429a77-kube-api-access-h8qst" pod "0664a227-abf2-46bb-bcb9-4184ac429a77" (UID: "0664a227-abf2-46bb-bcb9-4184ac429a77"). InnerVolumeSpecName "kube-api-access-h8qst". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 27 19:56:31 addons-416077 kubelet[1271]: I1227 19:56:31.530821    1271 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h8qst\" (UniqueName: \"kubernetes.io/projected/0664a227-abf2-46bb-bcb9-4184ac429a77-kube-api-access-h8qst\") on node \"addons-416077\" DevicePath \"\""
	Dec 27 19:56:32 addons-416077 kubelet[1271]: I1227 19:56:32.228145    1271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a211e08d6a5dbd5872806e8a3e3c0b60815b370ca0886748dc33cf248111e746"
	Dec 27 19:56:32 addons-416077 kubelet[1271]: E1227 19:56:32.228555    1271 prober_manager.go:197] "Startup probe already exists for container" pod="gadget/gadget-xz7sz" containerName="gadget"
	Dec 27 19:56:33 addons-416077 kubelet[1271]: I1227 19:56:33.035350    1271 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 27 19:56:33 addons-416077 kubelet[1271]: I1227 19:56:33.035395    1271 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 27 19:56:33 addons-416077 kubelet[1271]: E1227 19:56:33.235499    1271 prober_manager.go:197] "Startup probe already exists for container" pod="gadget/gadget-xz7sz" containerName="gadget"
	Dec 27 19:56:34 addons-416077 kubelet[1271]: E1227 19:56:34.182286    1271 prober_manager.go:209] "Readiness probe already exists for container" pod="yakd-dashboard/yakd-dashboard-865bfb49b9-c5n6k" containerName="yakd"
	Dec 27 19:56:34 addons-416077 kubelet[1271]: E1227 19:56:34.242567    1271 prober_manager.go:197] "Startup probe already exists for container" pod="gadget/gadget-xz7sz" containerName="gadget"
	Dec 27 19:56:34 addons-416077 kubelet[1271]: E1227 19:56:34.657731    1271 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 27 19:56:34 addons-416077 kubelet[1271]: E1227 19:56:34.657836    1271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/096babf5-8e2c-4bab-9054-a587a6f0d942-gcr-creds podName:096babf5-8e2c-4bab-9054-a587a6f0d942 nodeName:}" failed. No retries permitted until 2025-12-27 19:57:06.657818998 +0000 UTC m=+82.735528316 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/096babf5-8e2c-4bab-9054-a587a6f0d942-gcr-creds") pod "registry-creds-567fb78d95-dwfd6" (UID: "096babf5-8e2c-4bab-9054-a587a6f0d942") : secret "registry-creds-gcr" not found
	Dec 27 19:56:35 addons-416077 kubelet[1271]: E1227 19:56:35.250605    1271 prober_manager.go:221] "Liveness probe already exists for container" pod="kube-system/csi-hostpathplugin-fq7b4" containerName="hostpath"
	Dec 27 19:56:35 addons-416077 kubelet[1271]: I1227 19:56:35.264577    1271 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-fq7b4" podStartSLOduration=2.02254583 podStartE2EDuration="33.264560362s" podCreationTimestamp="2025-12-27 19:56:02 +0000 UTC" firstStartedPulling="2025-12-27 19:56:03.241582678 +0000 UTC m=+19.319291975" lastFinishedPulling="2025-12-27 19:56:34.483597194 +0000 UTC m=+50.561306507" observedRunningTime="2025-12-27 19:56:35.262777004 +0000 UTC m=+51.340486339" watchObservedRunningTime="2025-12-27 19:56:35.264560362 +0000 UTC m=+51.342269679"
	Dec 27 19:56:36 addons-416077 kubelet[1271]: E1227 19:56:36.255826    1271 prober_manager.go:221] "Liveness probe already exists for container" pod="kube-system/csi-hostpathplugin-fq7b4" containerName="hostpath"
	Dec 27 19:56:39 addons-416077 kubelet[1271]: E1227 19:56:39.267819    1271 prober_manager.go:209] "Readiness probe already exists for container" pod="ingress-nginx/ingress-nginx-controller-7847b5c79c-8d7p7" containerName="controller"
	Dec 27 19:56:39 addons-416077 kubelet[1271]: I1227 19:56:39.278633    1271 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-7847b5c79c-8d7p7" podStartSLOduration=44.640344381 podStartE2EDuration="48.278620271s" podCreationTimestamp="2025-12-27 19:55:51 +0000 UTC" firstStartedPulling="2025-12-27 19:56:35.055557116 +0000 UTC m=+51.133266425" lastFinishedPulling="2025-12-27 19:56:38.693833015 +0000 UTC m=+54.771542315" observedRunningTime="2025-12-27 19:56:39.277235035 +0000 UTC m=+55.354944368" watchObservedRunningTime="2025-12-27 19:56:39.278620271 +0000 UTC m=+55.356329588"
	Dec 27 19:56:40 addons-416077 kubelet[1271]: E1227 19:56:40.272058    1271 prober_manager.go:209] "Readiness probe already exists for container" pod="ingress-nginx/ingress-nginx-controller-7847b5c79c-8d7p7" containerName="controller"
	Dec 27 19:56:41 addons-416077 kubelet[1271]: I1227 19:56:41.615084    1271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/98ee8156-0eab-46e8-83c8-92bb16e99805-gcp-creds\") pod \"busybox\" (UID: \"98ee8156-0eab-46e8-83c8-92bb16e99805\") " pod="default/busybox"
	Dec 27 19:56:41 addons-416077 kubelet[1271]: I1227 19:56:41.615216    1271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7q94\" (UniqueName: \"kubernetes.io/projected/98ee8156-0eab-46e8-83c8-92bb16e99805-kube-api-access-t7q94\") pod \"busybox\" (UID: \"98ee8156-0eab-46e8-83c8-92bb16e99805\") " pod="default/busybox"
	Dec 27 19:56:43 addons-416077 kubelet[1271]: I1227 19:56:43.299906    1271 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.7606651009999998 podStartE2EDuration="2.299888009s" podCreationTimestamp="2025-12-27 19:56:41 +0000 UTC" firstStartedPulling="2025-12-27 19:56:41.787188172 +0000 UTC m=+57.864897468" lastFinishedPulling="2025-12-27 19:56:42.326411067 +0000 UTC m=+58.404120376" observedRunningTime="2025-12-27 19:56:43.299075733 +0000 UTC m=+59.376785051" watchObservedRunningTime="2025-12-27 19:56:43.299888009 +0000 UTC m=+59.377597328"
	Dec 27 19:56:50 addons-416077 kubelet[1271]: I1227 19:56:50.003361    1271 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="83ec29e2-ec41-4248-bab7-198b97760175" path="/var/lib/kubelet/pods/83ec29e2-ec41-4248-bab7-198b97760175/volumes"
	Dec 27 19:56:50 addons-416077 kubelet[1271]: E1227 19:56:50.274728    1271 prober_manager.go:209] "Readiness probe already exists for container" pod="ingress-nginx/ingress-nginx-controller-7847b5c79c-8d7p7" containerName="controller"
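	Note: the MountVolume.SetUp failure for volume "gcr-creds" at 19:56:34 is caused by the missing secret registry-creds-gcr; the kubelet backs off and retries at 19:57:06 as logged, which is also why registry-creds-567fb78d95-dwfd6 appears in the non-running-pods list further down. To check whether the secret was ever created (assuming the cluster is still up):
	
	  kubectl --context addons-416077 -n kube-system get secret registry-creds-gcr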
	
	
	==> storage-provisioner [3321a7f88f9d12626ac15d719b4297b42e6f8b07de2f8126dcbac0a1de31b158] <==
	W1227 19:56:25.461489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:56:27.464349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:56:27.467834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:56:29.470759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:56:29.474689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:56:31.477540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:56:31.481005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:56:33.484898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:56:33.488616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:56:35.492677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:56:35.497167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:56:37.500402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:56:37.503810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:56:39.506166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:56:39.509449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:56:41.512653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:56:41.515966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:56:43.518716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:56:43.523285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:56:45.526027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:56:45.529202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:56:47.531424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:56:47.535141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:56:49.538289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:56:49.541700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-416077 -n addons-416077
helpers_test.go:270: (dbg) Run:  kubectl --context addons-416077 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: nginx gcp-auth-certs-patch-xkdpw ingress-nginx-admission-create-nxjx8 ingress-nginx-admission-patch-cx4px registry-creds-567fb78d95-dwfd6
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-416077 describe pod nginx gcp-auth-certs-patch-xkdpw ingress-nginx-admission-create-nxjx8 ingress-nginx-admission-patch-cx4px registry-creds-567fb78d95-dwfd6
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-416077 describe pod nginx gcp-auth-certs-patch-xkdpw ingress-nginx-admission-create-nxjx8 ingress-nginx-admission-patch-cx4px registry-creds-567fb78d95-dwfd6: exit status 1 (61.94712ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-416077/192.168.49.2
	Start Time:       Sat, 27 Dec 2025 19:56:50 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          public.ecr.aws/nginx/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fjqsh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-fjqsh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  1s    default-scheduler  Successfully assigned default/nginx to addons-416077
	  Normal  Pulling    1s    kubelet            spec.containers{nginx}: Pulling image "public.ecr.aws/nginx/nginx:alpine"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-patch-xkdpw" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-nxjx8" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-cx4px" not found
	Error from server (NotFound): pods "registry-creds-567fb78d95-dwfd6" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-416077 describe pod nginx gcp-auth-certs-patch-xkdpw ingress-nginx-admission-create-nxjx8 ingress-nginx-admission-patch-cx4px registry-creds-567fb78d95-dwfd6: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-416077 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-416077 addons disable headlamp --alsologtostderr -v=1: exit status 11 (224.704086ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 19:56:51.199780   24773 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:56:51.199943   24773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:56:51.199953   24773 out.go:374] Setting ErrFile to fd 2...
	I1227 19:56:51.199958   24773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:56:51.200179   24773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 19:56:51.200410   24773 mustload.go:66] Loading cluster: addons-416077
	I1227 19:56:51.200750   24773 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:56:51.200774   24773 addons.go:622] checking whether the cluster is paused
	I1227 19:56:51.200867   24773 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:56:51.200880   24773 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:56:51.201247   24773 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:56:51.219821   24773 ssh_runner.go:195] Run: systemctl --version
	I1227 19:56:51.219876   24773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:56:51.235659   24773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:56:51.323559   24773 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:56:51.323629   24773 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:56:51.351634   24773 cri.go:96] found id: "b8a7bd9235b6a4caf6c2c4f7c4b1baa79ec8c0c022ed5dcff15829e54b1c454a"
	I1227 19:56:51.351659   24773 cri.go:96] found id: "209dc9e84dfba91aa626a0ca2d1bd87c93031429570f17334be471bb3447e5ae"
	I1227 19:56:51.351663   24773 cri.go:96] found id: "fd00ed25c985f64803a9055462ba7c0c73bbd5fd4120bb886eae5a2175f1997d"
	I1227 19:56:51.351666   24773 cri.go:96] found id: "e3bf55084b947cd6994ef964c03369061f8969a19d8255a0896caa1ed088455c"
	I1227 19:56:51.351669   24773 cri.go:96] found id: "6869a50a73d8de32c3dc51943aaa7488db0b52f9e50210bfacfc403e4b31abc3"
	I1227 19:56:51.351673   24773 cri.go:96] found id: "b1ec3b5e74d5f6a51732db328530d3318b36f99c9aec5711665c9196b1e154d0"
	I1227 19:56:51.351676   24773 cri.go:96] found id: "d88dbb28fc23c0c83a0e365974483d1d1857e5586aef6314c2ff3127b10883b0"
	I1227 19:56:51.351678   24773 cri.go:96] found id: "40622d51884d5a6d8ee55df90ebc729b6a3c523fc333355803131505ff3eb966"
	I1227 19:56:51.351681   24773 cri.go:96] found id: "0a91b3f02715c922c8d3b01e11f8224661e1ef5acc846bd67345788d0ca606bd"
	I1227 19:56:51.351689   24773 cri.go:96] found id: "1a50bd427e4c7baae7f3c47b456accb85ebbdde4eab6a955edcabd615b794f9e"
	I1227 19:56:51.351693   24773 cri.go:96] found id: "d3a037a6b817dd11c5ec44e00936772f3188aa489158b49a9834b054d01c82c7"
	I1227 19:56:51.351695   24773 cri.go:96] found id: "23bfea2446ed16492bae9feadf79f1824dcfe4a849e48f5de8c800dbd990887c"
	I1227 19:56:51.351698   24773 cri.go:96] found id: "91336251bde316e5658d75f0212a65106cb58aa18bf9b1c9f4d3c56c5f26c8ff"
	I1227 19:56:51.351701   24773 cri.go:96] found id: "ca648bdc045e5c3e33cb4b4a17e50db157a0627a3c8b78c16aa0df32b6ac6a0f"
	I1227 19:56:51.351704   24773 cri.go:96] found id: "5c6285e9513d1d5dca2d137de3e5116c2afb944c760615277ecacea48b4df496"
	I1227 19:56:51.351716   24773 cri.go:96] found id: "19ba832dc0107404daa365e890cf11d99044af20216b50eeb6911292fc34aaa0"
	I1227 19:56:51.351729   24773 cri.go:96] found id: "bb1c2025dc24f7c70349970cd631d0d6533295514fad7908a0cb32e5a75c7b16"
	I1227 19:56:51.351736   24773 cri.go:96] found id: "3321a7f88f9d12626ac15d719b4297b42e6f8b07de2f8126dcbac0a1de31b158"
	I1227 19:56:51.351739   24773 cri.go:96] found id: "1502dada111653661c0e71a273490dd919cf35583ca9b21e2bc09c337661c35e"
	I1227 19:56:51.351741   24773 cri.go:96] found id: "49d7f2a724dd38975a39bb75f10f2c0994c7851ab44689d5ebc0a9b214f17cf0"
	I1227 19:56:51.351746   24773 cri.go:96] found id: "974b47f0c91a71d1e2e051d35f13cc1107f3c144323412c0713a463400700010"
	I1227 19:56:51.351751   24773 cri.go:96] found id: "0260af8be5e3cd213383de25d2bae0781b2931284e151ca5e6c8d49dad444e87"
	I1227 19:56:51.351753   24773 cri.go:96] found id: "894722f2278ed9a7a94031fcca594d94b7f3435e0dfae78ff810a233ffb75601"
	I1227 19:56:51.351756   24773 cri.go:96] found id: "cf4602f54ae9bc8b2fbe9e14c800a848f78488296da5de81535f76b847b75c95"
	I1227 19:56:51.351759   24773 cri.go:96] found id: ""
	I1227 19:56:51.351807   24773 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:56:51.365908   24773 out.go:203] 
	W1227 19:56:51.367213   24773 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:56:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:56:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:56:51.367231   24773 out.go:285] * 
	* 
	W1227 19:56:51.367939   24773 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:56:51.369131   24773 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-416077 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.41s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.24s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-kgmdm" [2db6980d-5f36-4ec7-95e0-fdfbbd6751c5] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003485622s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-416077 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-416077 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (236.509128ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 19:57:12.167739   27192 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:57:12.168072   27192 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:57:12.168084   27192 out.go:374] Setting ErrFile to fd 2...
	I1227 19:57:12.168088   27192 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:57:12.168287   27192 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 19:57:12.168548   27192 mustload.go:66] Loading cluster: addons-416077
	I1227 19:57:12.168889   27192 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:57:12.168921   27192 addons.go:622] checking whether the cluster is paused
	I1227 19:57:12.169002   27192 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:57:12.169013   27192 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:57:12.169346   27192 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:57:12.186324   27192 ssh_runner.go:195] Run: systemctl --version
	I1227 19:57:12.186370   27192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:57:12.202706   27192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:57:12.296108   27192 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:57:12.296226   27192 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:57:12.329265   27192 cri.go:96] found id: "b8a7bd9235b6a4caf6c2c4f7c4b1baa79ec8c0c022ed5dcff15829e54b1c454a"
	I1227 19:57:12.329284   27192 cri.go:96] found id: "209dc9e84dfba91aa626a0ca2d1bd87c93031429570f17334be471bb3447e5ae"
	I1227 19:57:12.329288   27192 cri.go:96] found id: "fd00ed25c985f64803a9055462ba7c0c73bbd5fd4120bb886eae5a2175f1997d"
	I1227 19:57:12.329291   27192 cri.go:96] found id: "e3bf55084b947cd6994ef964c03369061f8969a19d8255a0896caa1ed088455c"
	I1227 19:57:12.329301   27192 cri.go:96] found id: "6869a50a73d8de32c3dc51943aaa7488db0b52f9e50210bfacfc403e4b31abc3"
	I1227 19:57:12.329308   27192 cri.go:96] found id: "b1ec3b5e74d5f6a51732db328530d3318b36f99c9aec5711665c9196b1e154d0"
	I1227 19:57:12.329311   27192 cri.go:96] found id: "d88dbb28fc23c0c83a0e365974483d1d1857e5586aef6314c2ff3127b10883b0"
	I1227 19:57:12.329314   27192 cri.go:96] found id: "40622d51884d5a6d8ee55df90ebc729b6a3c523fc333355803131505ff3eb966"
	I1227 19:57:12.329317   27192 cri.go:96] found id: "0a91b3f02715c922c8d3b01e11f8224661e1ef5acc846bd67345788d0ca606bd"
	I1227 19:57:12.329328   27192 cri.go:96] found id: "1a50bd427e4c7baae7f3c47b456accb85ebbdde4eab6a955edcabd615b794f9e"
	I1227 19:57:12.329331   27192 cri.go:96] found id: "d3a037a6b817dd11c5ec44e00936772f3188aa489158b49a9834b054d01c82c7"
	I1227 19:57:12.329334   27192 cri.go:96] found id: "23bfea2446ed16492bae9feadf79f1824dcfe4a849e48f5de8c800dbd990887c"
	I1227 19:57:12.329337   27192 cri.go:96] found id: "91336251bde316e5658d75f0212a65106cb58aa18bf9b1c9f4d3c56c5f26c8ff"
	I1227 19:57:12.329340   27192 cri.go:96] found id: "ca648bdc045e5c3e33cb4b4a17e50db157a0627a3c8b78c16aa0df32b6ac6a0f"
	I1227 19:57:12.329343   27192 cri.go:96] found id: "5c6285e9513d1d5dca2d137de3e5116c2afb944c760615277ecacea48b4df496"
	I1227 19:57:12.329348   27192 cri.go:96] found id: "19ba832dc0107404daa365e890cf11d99044af20216b50eeb6911292fc34aaa0"
	I1227 19:57:12.329350   27192 cri.go:96] found id: "bb1c2025dc24f7c70349970cd631d0d6533295514fad7908a0cb32e5a75c7b16"
	I1227 19:57:12.329354   27192 cri.go:96] found id: "3321a7f88f9d12626ac15d719b4297b42e6f8b07de2f8126dcbac0a1de31b158"
	I1227 19:57:12.329357   27192 cri.go:96] found id: "1502dada111653661c0e71a273490dd919cf35583ca9b21e2bc09c337661c35e"
	I1227 19:57:12.329360   27192 cri.go:96] found id: "49d7f2a724dd38975a39bb75f10f2c0994c7851ab44689d5ebc0a9b214f17cf0"
	I1227 19:57:12.329362   27192 cri.go:96] found id: "974b47f0c91a71d1e2e051d35f13cc1107f3c144323412c0713a463400700010"
	I1227 19:57:12.329365   27192 cri.go:96] found id: "0260af8be5e3cd213383de25d2bae0781b2931284e151ca5e6c8d49dad444e87"
	I1227 19:57:12.329368   27192 cri.go:96] found id: "894722f2278ed9a7a94031fcca594d94b7f3435e0dfae78ff810a233ffb75601"
	I1227 19:57:12.329371   27192 cri.go:96] found id: "cf4602f54ae9bc8b2fbe9e14c800a848f78488296da5de81535f76b847b75c95"
	I1227 19:57:12.329373   27192 cri.go:96] found id: ""
	I1227 19:57:12.329408   27192 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:57:12.343902   27192 out.go:203] 
	W1227 19:57:12.345058   27192 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:57:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:57:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:57:12.345082   27192 out.go:285] * 
	* 
	W1227 19:57:12.345784   27192 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:57:12.346831   27192 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-416077 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.24s)

                                                
                                    
TestAddons/parallel/LocalPath (10.07s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-416077 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-416077 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-416077 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-416077 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-416077 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-416077 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-416077 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-416077 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [37dbab95-1e77-400b-98f3-c5d45d3e8928] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [37dbab95-1e77-400b-98f3-c5d45d3e8928] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [37dbab95-1e77-400b-98f3-c5d45d3e8928] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003373488s
addons_test.go:969: (dbg) Run:  kubectl --context addons-416077 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-416077 ssh "cat /opt/local-path-provisioner/pvc-4c1b453b-93dd-47d9-8600-556b872469b0_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-416077 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-416077 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-416077 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-416077 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (227.657962ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 19:57:16.940232   27609 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:57:16.940512   27609 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:57:16.940522   27609 out.go:374] Setting ErrFile to fd 2...
	I1227 19:57:16.940526   27609 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:57:16.940741   27609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 19:57:16.941071   27609 mustload.go:66] Loading cluster: addons-416077
	I1227 19:57:16.941436   27609 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:57:16.941458   27609 addons.go:622] checking whether the cluster is paused
	I1227 19:57:16.941563   27609 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:57:16.941587   27609 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:57:16.942034   27609 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:57:16.958858   27609 ssh_runner.go:195] Run: systemctl --version
	I1227 19:57:16.958909   27609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:57:16.975165   27609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:57:17.063069   27609 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:57:17.063168   27609 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:57:17.093006   27609 cri.go:96] found id: "b8a7bd9235b6a4caf6c2c4f7c4b1baa79ec8c0c022ed5dcff15829e54b1c454a"
	I1227 19:57:17.093032   27609 cri.go:96] found id: "209dc9e84dfba91aa626a0ca2d1bd87c93031429570f17334be471bb3447e5ae"
	I1227 19:57:17.093036   27609 cri.go:96] found id: "fd00ed25c985f64803a9055462ba7c0c73bbd5fd4120bb886eae5a2175f1997d"
	I1227 19:57:17.093040   27609 cri.go:96] found id: "e3bf55084b947cd6994ef964c03369061f8969a19d8255a0896caa1ed088455c"
	I1227 19:57:17.093042   27609 cri.go:96] found id: "6869a50a73d8de32c3dc51943aaa7488db0b52f9e50210bfacfc403e4b31abc3"
	I1227 19:57:17.093047   27609 cri.go:96] found id: "b1ec3b5e74d5f6a51732db328530d3318b36f99c9aec5711665c9196b1e154d0"
	I1227 19:57:17.093050   27609 cri.go:96] found id: "d88dbb28fc23c0c83a0e365974483d1d1857e5586aef6314c2ff3127b10883b0"
	I1227 19:57:17.093052   27609 cri.go:96] found id: "40622d51884d5a6d8ee55df90ebc729b6a3c523fc333355803131505ff3eb966"
	I1227 19:57:17.093055   27609 cri.go:96] found id: "0a91b3f02715c922c8d3b01e11f8224661e1ef5acc846bd67345788d0ca606bd"
	I1227 19:57:17.093068   27609 cri.go:96] found id: "1a50bd427e4c7baae7f3c47b456accb85ebbdde4eab6a955edcabd615b794f9e"
	I1227 19:57:17.093071   27609 cri.go:96] found id: "d3a037a6b817dd11c5ec44e00936772f3188aa489158b49a9834b054d01c82c7"
	I1227 19:57:17.093074   27609 cri.go:96] found id: "23bfea2446ed16492bae9feadf79f1824dcfe4a849e48f5de8c800dbd990887c"
	I1227 19:57:17.093076   27609 cri.go:96] found id: "91336251bde316e5658d75f0212a65106cb58aa18bf9b1c9f4d3c56c5f26c8ff"
	I1227 19:57:17.093079   27609 cri.go:96] found id: "ca648bdc045e5c3e33cb4b4a17e50db157a0627a3c8b78c16aa0df32b6ac6a0f"
	I1227 19:57:17.093082   27609 cri.go:96] found id: "5c6285e9513d1d5dca2d137de3e5116c2afb944c760615277ecacea48b4df496"
	I1227 19:57:17.093089   27609 cri.go:96] found id: "19ba832dc0107404daa365e890cf11d99044af20216b50eeb6911292fc34aaa0"
	I1227 19:57:17.093092   27609 cri.go:96] found id: "bb1c2025dc24f7c70349970cd631d0d6533295514fad7908a0cb32e5a75c7b16"
	I1227 19:57:17.093096   27609 cri.go:96] found id: "3321a7f88f9d12626ac15d719b4297b42e6f8b07de2f8126dcbac0a1de31b158"
	I1227 19:57:17.093099   27609 cri.go:96] found id: "1502dada111653661c0e71a273490dd919cf35583ca9b21e2bc09c337661c35e"
	I1227 19:57:17.093102   27609 cri.go:96] found id: "49d7f2a724dd38975a39bb75f10f2c0994c7851ab44689d5ebc0a9b214f17cf0"
	I1227 19:57:17.093105   27609 cri.go:96] found id: "974b47f0c91a71d1e2e051d35f13cc1107f3c144323412c0713a463400700010"
	I1227 19:57:17.093108   27609 cri.go:96] found id: "0260af8be5e3cd213383de25d2bae0781b2931284e151ca5e6c8d49dad444e87"
	I1227 19:57:17.093111   27609 cri.go:96] found id: "894722f2278ed9a7a94031fcca594d94b7f3435e0dfae78ff810a233ffb75601"
	I1227 19:57:17.093114   27609 cri.go:96] found id: "cf4602f54ae9bc8b2fbe9e14c800a848f78488296da5de81535f76b847b75c95"
	I1227 19:57:17.093117   27609 cri.go:96] found id: ""
	I1227 19:57:17.093167   27609 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:57:17.106988   27609 out.go:203] 
	W1227 19:57:17.108116   27609 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:57:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:57:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:57:17.108139   27609 out.go:285] * 
	* 
	W1227 19:57:17.108818   27609 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:57:17.109887   27609 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-416077 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (10.07s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.25s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-vqxk8" [d519c5f2-43a2-436d-804a-b2d032e7076e] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.008306056s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-416077 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-416077 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (245.315397ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 19:57:00.324495   26168 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:57:00.324776   26168 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:57:00.324786   26168 out.go:374] Setting ErrFile to fd 2...
	I1227 19:57:00.324790   26168 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:57:00.325006   26168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 19:57:00.325268   26168 mustload.go:66] Loading cluster: addons-416077
	I1227 19:57:00.325562   26168 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:57:00.325579   26168 addons.go:622] checking whether the cluster is paused
	I1227 19:57:00.325656   26168 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:57:00.325667   26168 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:57:00.326024   26168 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:57:00.343116   26168 ssh_runner.go:195] Run: systemctl --version
	I1227 19:57:00.343162   26168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:57:00.365581   26168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:57:00.454247   26168 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:57:00.454348   26168 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:57:00.487336   26168 cri.go:96] found id: "b8a7bd9235b6a4caf6c2c4f7c4b1baa79ec8c0c022ed5dcff15829e54b1c454a"
	I1227 19:57:00.487363   26168 cri.go:96] found id: "209dc9e84dfba91aa626a0ca2d1bd87c93031429570f17334be471bb3447e5ae"
	I1227 19:57:00.487368   26168 cri.go:96] found id: "fd00ed25c985f64803a9055462ba7c0c73bbd5fd4120bb886eae5a2175f1997d"
	I1227 19:57:00.487373   26168 cri.go:96] found id: "e3bf55084b947cd6994ef964c03369061f8969a19d8255a0896caa1ed088455c"
	I1227 19:57:00.487378   26168 cri.go:96] found id: "6869a50a73d8de32c3dc51943aaa7488db0b52f9e50210bfacfc403e4b31abc3"
	I1227 19:57:00.487383   26168 cri.go:96] found id: "b1ec3b5e74d5f6a51732db328530d3318b36f99c9aec5711665c9196b1e154d0"
	I1227 19:57:00.487387   26168 cri.go:96] found id: "d88dbb28fc23c0c83a0e365974483d1d1857e5586aef6314c2ff3127b10883b0"
	I1227 19:57:00.487392   26168 cri.go:96] found id: "40622d51884d5a6d8ee55df90ebc729b6a3c523fc333355803131505ff3eb966"
	I1227 19:57:00.487396   26168 cri.go:96] found id: "0a91b3f02715c922c8d3b01e11f8224661e1ef5acc846bd67345788d0ca606bd"
	I1227 19:57:00.487404   26168 cri.go:96] found id: "1a50bd427e4c7baae7f3c47b456accb85ebbdde4eab6a955edcabd615b794f9e"
	I1227 19:57:00.487408   26168 cri.go:96] found id: "d3a037a6b817dd11c5ec44e00936772f3188aa489158b49a9834b054d01c82c7"
	I1227 19:57:00.487413   26168 cri.go:96] found id: "23bfea2446ed16492bae9feadf79f1824dcfe4a849e48f5de8c800dbd990887c"
	I1227 19:57:00.487418   26168 cri.go:96] found id: "91336251bde316e5658d75f0212a65106cb58aa18bf9b1c9f4d3c56c5f26c8ff"
	I1227 19:57:00.487457   26168 cri.go:96] found id: "ca648bdc045e5c3e33cb4b4a17e50db157a0627a3c8b78c16aa0df32b6ac6a0f"
	I1227 19:57:00.487469   26168 cri.go:96] found id: "5c6285e9513d1d5dca2d137de3e5116c2afb944c760615277ecacea48b4df496"
	I1227 19:57:00.487478   26168 cri.go:96] found id: "19ba832dc0107404daa365e890cf11d99044af20216b50eeb6911292fc34aaa0"
	I1227 19:57:00.487483   26168 cri.go:96] found id: "bb1c2025dc24f7c70349970cd631d0d6533295514fad7908a0cb32e5a75c7b16"
	I1227 19:57:00.487488   26168 cri.go:96] found id: "3321a7f88f9d12626ac15d719b4297b42e6f8b07de2f8126dcbac0a1de31b158"
	I1227 19:57:00.487493   26168 cri.go:96] found id: "1502dada111653661c0e71a273490dd919cf35583ca9b21e2bc09c337661c35e"
	I1227 19:57:00.487496   26168 cri.go:96] found id: "49d7f2a724dd38975a39bb75f10f2c0994c7851ab44689d5ebc0a9b214f17cf0"
	I1227 19:57:00.487499   26168 cri.go:96] found id: "974b47f0c91a71d1e2e051d35f13cc1107f3c144323412c0713a463400700010"
	I1227 19:57:00.487502   26168 cri.go:96] found id: "0260af8be5e3cd213383de25d2bae0781b2931284e151ca5e6c8d49dad444e87"
	I1227 19:57:00.487508   26168 cri.go:96] found id: "894722f2278ed9a7a94031fcca594d94b7f3435e0dfae78ff810a233ffb75601"
	I1227 19:57:00.487511   26168 cri.go:96] found id: "cf4602f54ae9bc8b2fbe9e14c800a848f78488296da5de81535f76b847b75c95"
	I1227 19:57:00.487514   26168 cri.go:96] found id: ""
	I1227 19:57:00.487553   26168 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:57:00.501753   26168 out.go:203] 
	W1227 19:57:00.503136   26168 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:57:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:57:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:57:00.503153   26168 out.go:285] * 
	* 
	W1227 19:57:00.503943   26168 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:57:00.505077   26168 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-416077 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.25s)

                                                
                                    
TestAddons/parallel/Yakd (6.25s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-865bfb49b9-c5n6k" [cb99873b-463d-438b-9921-032e0dbe95b3] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003420476s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-416077 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-416077 addons disable yakd --alsologtostderr -v=1: exit status 11 (247.803807ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 19:57:06.919739   26784 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:57:06.920042   26784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:57:06.920052   26784 out.go:374] Setting ErrFile to fd 2...
	I1227 19:57:06.920056   26784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:57:06.920262   26784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 19:57:06.920493   26784 mustload.go:66] Loading cluster: addons-416077
	I1227 19:57:06.920857   26784 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:57:06.920887   26784 addons.go:622] checking whether the cluster is paused
	I1227 19:57:06.921035   26784 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:57:06.921053   26784 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:57:06.921566   26784 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:57:06.940944   26784 ssh_runner.go:195] Run: systemctl --version
	I1227 19:57:06.941019   26784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:57:06.960407   26784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:57:07.053644   26784 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:57:07.053710   26784 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:57:07.084623   26784 cri.go:96] found id: "b8a7bd9235b6a4caf6c2c4f7c4b1baa79ec8c0c022ed5dcff15829e54b1c454a"
	I1227 19:57:07.084643   26784 cri.go:96] found id: "209dc9e84dfba91aa626a0ca2d1bd87c93031429570f17334be471bb3447e5ae"
	I1227 19:57:07.084649   26784 cri.go:96] found id: "fd00ed25c985f64803a9055462ba7c0c73bbd5fd4120bb886eae5a2175f1997d"
	I1227 19:57:07.084655   26784 cri.go:96] found id: "e3bf55084b947cd6994ef964c03369061f8969a19d8255a0896caa1ed088455c"
	I1227 19:57:07.084660   26784 cri.go:96] found id: "6869a50a73d8de32c3dc51943aaa7488db0b52f9e50210bfacfc403e4b31abc3"
	I1227 19:57:07.084667   26784 cri.go:96] found id: "b1ec3b5e74d5f6a51732db328530d3318b36f99c9aec5711665c9196b1e154d0"
	I1227 19:57:07.084671   26784 cri.go:96] found id: "d88dbb28fc23c0c83a0e365974483d1d1857e5586aef6314c2ff3127b10883b0"
	I1227 19:57:07.084677   26784 cri.go:96] found id: "40622d51884d5a6d8ee55df90ebc729b6a3c523fc333355803131505ff3eb966"
	I1227 19:57:07.084680   26784 cri.go:96] found id: "0a91b3f02715c922c8d3b01e11f8224661e1ef5acc846bd67345788d0ca606bd"
	I1227 19:57:07.084699   26784 cri.go:96] found id: "1a50bd427e4c7baae7f3c47b456accb85ebbdde4eab6a955edcabd615b794f9e"
	I1227 19:57:07.084708   26784 cri.go:96] found id: "d3a037a6b817dd11c5ec44e00936772f3188aa489158b49a9834b054d01c82c7"
	I1227 19:57:07.084713   26784 cri.go:96] found id: "23bfea2446ed16492bae9feadf79f1824dcfe4a849e48f5de8c800dbd990887c"
	I1227 19:57:07.084722   26784 cri.go:96] found id: "91336251bde316e5658d75f0212a65106cb58aa18bf9b1c9f4d3c56c5f26c8ff"
	I1227 19:57:07.084727   26784 cri.go:96] found id: "ca648bdc045e5c3e33cb4b4a17e50db157a0627a3c8b78c16aa0df32b6ac6a0f"
	I1227 19:57:07.084735   26784 cri.go:96] found id: "5c6285e9513d1d5dca2d137de3e5116c2afb944c760615277ecacea48b4df496"
	I1227 19:57:07.084742   26784 cri.go:96] found id: "19ba832dc0107404daa365e890cf11d99044af20216b50eeb6911292fc34aaa0"
	I1227 19:57:07.084748   26784 cri.go:96] found id: "bb1c2025dc24f7c70349970cd631d0d6533295514fad7908a0cb32e5a75c7b16"
	I1227 19:57:07.084754   26784 cri.go:96] found id: "3321a7f88f9d12626ac15d719b4297b42e6f8b07de2f8126dcbac0a1de31b158"
	I1227 19:57:07.084762   26784 cri.go:96] found id: "1502dada111653661c0e71a273490dd919cf35583ca9b21e2bc09c337661c35e"
	I1227 19:57:07.084767   26784 cri.go:96] found id: "49d7f2a724dd38975a39bb75f10f2c0994c7851ab44689d5ebc0a9b214f17cf0"
	I1227 19:57:07.084772   26784 cri.go:96] found id: "974b47f0c91a71d1e2e051d35f13cc1107f3c144323412c0713a463400700010"
	I1227 19:57:07.084780   26784 cri.go:96] found id: "0260af8be5e3cd213383de25d2bae0781b2931284e151ca5e6c8d49dad444e87"
	I1227 19:57:07.084784   26784 cri.go:96] found id: "894722f2278ed9a7a94031fcca594d94b7f3435e0dfae78ff810a233ffb75601"
	I1227 19:57:07.084792   26784 cri.go:96] found id: "cf4602f54ae9bc8b2fbe9e14c800a848f78488296da5de81535f76b847b75c95"
	I1227 19:57:07.084797   26784 cri.go:96] found id: ""
	I1227 19:57:07.084842   26784 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:57:07.099076   26784 out.go:203] 
	W1227 19:57:07.100198   26784 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:57:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:57:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:57:07.100223   26784 out.go:285] * 
	* 
	W1227 19:57:07.101158   26784 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:57:07.102131   26784 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-416077 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.25s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (6.29s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-qn65m" [7b2a13a1-3c72-4975-9585-3d01f151c548] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.056515711s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-416077 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-416077 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (233.158655ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 19:57:06.624032   26679 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:57:06.624201   26679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:57:06.624217   26679 out.go:374] Setting ErrFile to fd 2...
	I1227 19:57:06.624224   26679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:57:06.624471   26679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 19:57:06.624747   26679 mustload.go:66] Loading cluster: addons-416077
	I1227 19:57:06.625087   26679 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:57:06.625113   26679 addons.go:622] checking whether the cluster is paused
	I1227 19:57:06.625225   26679 config.go:182] Loaded profile config "addons-416077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:57:06.625242   26679 host.go:66] Checking if "addons-416077" exists ...
	I1227 19:57:06.625598   26679 cli_runner.go:164] Run: docker container inspect addons-416077 --format={{.State.Status}}
	I1227 19:57:06.644224   26679 ssh_runner.go:195] Run: systemctl --version
	I1227 19:57:06.644283   26679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-416077
	I1227 19:57:06.661037   26679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/addons-416077/id_rsa Username:docker}
	I1227 19:57:06.752283   26679 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:57:06.752361   26679 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:57:06.780236   26679 cri.go:96] found id: "b8a7bd9235b6a4caf6c2c4f7c4b1baa79ec8c0c022ed5dcff15829e54b1c454a"
	I1227 19:57:06.780271   26679 cri.go:96] found id: "209dc9e84dfba91aa626a0ca2d1bd87c93031429570f17334be471bb3447e5ae"
	I1227 19:57:06.780276   26679 cri.go:96] found id: "fd00ed25c985f64803a9055462ba7c0c73bbd5fd4120bb886eae5a2175f1997d"
	I1227 19:57:06.780279   26679 cri.go:96] found id: "e3bf55084b947cd6994ef964c03369061f8969a19d8255a0896caa1ed088455c"
	I1227 19:57:06.780282   26679 cri.go:96] found id: "6869a50a73d8de32c3dc51943aaa7488db0b52f9e50210bfacfc403e4b31abc3"
	I1227 19:57:06.780287   26679 cri.go:96] found id: "b1ec3b5e74d5f6a51732db328530d3318b36f99c9aec5711665c9196b1e154d0"
	I1227 19:57:06.780292   26679 cri.go:96] found id: "d88dbb28fc23c0c83a0e365974483d1d1857e5586aef6314c2ff3127b10883b0"
	I1227 19:57:06.780296   26679 cri.go:96] found id: "40622d51884d5a6d8ee55df90ebc729b6a3c523fc333355803131505ff3eb966"
	I1227 19:57:06.780300   26679 cri.go:96] found id: "0a91b3f02715c922c8d3b01e11f8224661e1ef5acc846bd67345788d0ca606bd"
	I1227 19:57:06.780318   26679 cri.go:96] found id: "1a50bd427e4c7baae7f3c47b456accb85ebbdde4eab6a955edcabd615b794f9e"
	I1227 19:57:06.780327   26679 cri.go:96] found id: "d3a037a6b817dd11c5ec44e00936772f3188aa489158b49a9834b054d01c82c7"
	I1227 19:57:06.780330   26679 cri.go:96] found id: "23bfea2446ed16492bae9feadf79f1824dcfe4a849e48f5de8c800dbd990887c"
	I1227 19:57:06.780333   26679 cri.go:96] found id: "91336251bde316e5658d75f0212a65106cb58aa18bf9b1c9f4d3c56c5f26c8ff"
	I1227 19:57:06.780335   26679 cri.go:96] found id: "ca648bdc045e5c3e33cb4b4a17e50db157a0627a3c8b78c16aa0df32b6ac6a0f"
	I1227 19:57:06.780338   26679 cri.go:96] found id: "5c6285e9513d1d5dca2d137de3e5116c2afb944c760615277ecacea48b4df496"
	I1227 19:57:06.780350   26679 cri.go:96] found id: "19ba832dc0107404daa365e890cf11d99044af20216b50eeb6911292fc34aaa0"
	I1227 19:57:06.780355   26679 cri.go:96] found id: "bb1c2025dc24f7c70349970cd631d0d6533295514fad7908a0cb32e5a75c7b16"
	I1227 19:57:06.780360   26679 cri.go:96] found id: "3321a7f88f9d12626ac15d719b4297b42e6f8b07de2f8126dcbac0a1de31b158"
	I1227 19:57:06.780365   26679 cri.go:96] found id: "1502dada111653661c0e71a273490dd919cf35583ca9b21e2bc09c337661c35e"
	I1227 19:57:06.780368   26679 cri.go:96] found id: "49d7f2a724dd38975a39bb75f10f2c0994c7851ab44689d5ebc0a9b214f17cf0"
	I1227 19:57:06.780373   26679 cri.go:96] found id: "974b47f0c91a71d1e2e051d35f13cc1107f3c144323412c0713a463400700010"
	I1227 19:57:06.780378   26679 cri.go:96] found id: "0260af8be5e3cd213383de25d2bae0781b2931284e151ca5e6c8d49dad444e87"
	I1227 19:57:06.780381   26679 cri.go:96] found id: "894722f2278ed9a7a94031fcca594d94b7f3435e0dfae78ff810a233ffb75601"
	I1227 19:57:06.780384   26679 cri.go:96] found id: "cf4602f54ae9bc8b2fbe9e14c800a848f78488296da5de81535f76b847b75c95"
	I1227 19:57:06.780391   26679 cri.go:96] found id: ""
	I1227 19:57:06.780443   26679 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:57:06.793405   26679 out.go:203] 
	W1227 19:57:06.794712   26679 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:57:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:57:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:57:06.794738   26679 out.go:285] * 
	* 
	W1227 19:57:06.795749   26679 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:57:06.796871   26679 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-416077 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.29s)
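Note on the addon failures above: the disable call exits with MK_ADDON_DISABLE_PAUSED even though the amd-gpu-device-plugin pod was reported healthy seconds earlier. The captured log shows why: before disabling, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running "sudo runc list -f json", and that last call fails because /run/runc does not exist on the node. A minimal diagnostic sketch against the profile from this report (standard minikube/crictl/runc invocations run from the same workspace; not part of the test itself):

	# state directory the paused-state check expects (missing in this run)
	out/minikube-linux-amd64 -p addons-416077 ssh -- ls -ld /run/runc
	# CRI-O itself still lists the kube-system containers
	out/minikube-linux-amd64 -p addons-416077 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# the exact call that fails with "open /run/runc: no such file or directory"
	out/minikube-linux-amd64 -p addons-416077 ssh -- sudo runc list -f json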

                                                
                                    
TestJSONOutput/pause/Command (2.22s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-638726 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-638726 --output=json --user=testUser: exit status 80 (2.217767799s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8b867e9f-15fb-449f-8288-7b16d7829dc8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-638726 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"1c00099c-8de9-4526-ba23-608f31dafa13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-27T20:08:30Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"51e75bf5-7b4f-40be-87d4-8262d6550d26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-638726 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.22s)
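The JSON-output pause test hits the same underlying error, surfaced as a CloudEvents error event (type io.k8s.sigs.minikube.error, name GUEST_PAUSE) on stdout. A small sketch for extracting just the error messages from that event stream, assuming jq is available on the host (jq is not used by the test harness itself):

	out/minikube-linux-amd64 pause -p json-output-638726 --output=json --user=testUser 2>/dev/null \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'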

                                                
                                    
TestJSONOutput/unpause/Command (1.8s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-638726 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-638726 --output=json --user=testUser: exit status 80 (1.796740577s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6a050099-afe5-4705-927c-bd65a949dfd1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-638726 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"627687d6-7267-419a-b010-5f2822f0a8c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-27T20:08:32Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"1fedd9f0-ecfd-44e3-ba28-c0757bd68bb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-638726 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.80s)
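The unpause variant fails in mirror image (GUEST_UNPAUSE, same "open /run/runc: no such file or directory"), which points at node runtime state rather than the pause/unpause logic. The failing call can be reproduced directly while the profile is still up; a sketch:

	out/minikube-linux-amd64 -p json-output-638726 ssh -- sudo runc list -f json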

                                                
                                    
TestPause/serial/Pause (5.78s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-260501 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-260501 --alsologtostderr -v=5: exit status 80 (2.695803854s)

                                                
                                                
-- stdout --
	* Pausing node pause-260501 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 20:18:53.769729  188857 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:18:53.769997  188857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:18:53.770008  188857 out.go:374] Setting ErrFile to fd 2...
	I1227 20:18:53.770013  188857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:18:53.770193  188857 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:18:53.770428  188857 out.go:368] Setting JSON to false
	I1227 20:18:53.770445  188857 mustload.go:66] Loading cluster: pause-260501
	I1227 20:18:53.770801  188857 config.go:182] Loaded profile config "pause-260501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:18:53.771164  188857 cli_runner.go:164] Run: docker container inspect pause-260501 --format={{.State.Status}}
	I1227 20:18:53.789411  188857 host.go:66] Checking if "pause-260501" exists ...
	I1227 20:18:53.789689  188857 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:18:53.843518  188857 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:93 SystemTime:2025-12-27 20:18:53.8338642 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:18:53.844172  188857 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22332/minikube-v1.37.0-1766811082-22332-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766811082-22332/minikube-v1.37.0-1766811082-22332-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766811082-22332-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:pause-260501 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true
) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1227 20:18:54.127011  188857 out.go:179] * Pausing node pause-260501 ... 
	I1227 20:18:54.146900  188857 host.go:66] Checking if "pause-260501" exists ...
	I1227 20:18:54.147202  188857 ssh_runner.go:195] Run: systemctl --version
	I1227 20:18:54.147245  188857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-260501
	I1227 20:18:54.165945  188857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/pause-260501/id_rsa Username:docker}
	I1227 20:18:54.254346  188857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:18:54.284359  188857 pause.go:52] kubelet running: true
	I1227 20:18:54.284461  188857 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:18:54.455064  188857 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:18:54.455147  188857 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:18:54.528680  188857 cri.go:96] found id: "cf03e8762effdfc8615d0825fc25c76bf309030bca6f5b5dba1cf2bc45ab7b9b"
	I1227 20:18:54.528702  188857 cri.go:96] found id: "dd9c37b6e0ef0a6ff5fee1507e6965f442c627d3752e5bb000e23710ed200f10"
	I1227 20:18:54.528708  188857 cri.go:96] found id: "491ed3cccbbbff35a9a2b5d6c4d3c080dde63f3a05f92d6a4d4c641e33138251"
	I1227 20:18:54.528712  188857 cri.go:96] found id: "0eb10b2c1353283b73e89aab5227babfbab9aec10d971eb4c5585bc81670d996"
	I1227 20:18:54.528717  188857 cri.go:96] found id: "8cce63881e55d69c5de5a7bdb3a3b915ce0f32cc79d339c7c36adb6f4364da00"
	I1227 20:18:54.528721  188857 cri.go:96] found id: "4387c47adc01cbe0fb87a870b9c03c95144124b9028676daacbb7207b13bb5fa"
	I1227 20:18:54.528725  188857 cri.go:96] found id: "f17d36ee3c40f02c7a5e2155ee2af467bf6e970c949b40552c24e9b25b0187ba"
	I1227 20:18:54.528728  188857 cri.go:96] found id: ""
	I1227 20:18:54.528777  188857 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:18:54.541832  188857 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:18:54Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:18:54.826110  188857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:18:54.841658  188857 pause.go:52] kubelet running: false
	I1227 20:18:54.841708  188857 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:18:54.998489  188857 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:18:54.998589  188857 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:18:55.084383  188857 cri.go:96] found id: "cf03e8762effdfc8615d0825fc25c76bf309030bca6f5b5dba1cf2bc45ab7b9b"
	I1227 20:18:55.084410  188857 cri.go:96] found id: "dd9c37b6e0ef0a6ff5fee1507e6965f442c627d3752e5bb000e23710ed200f10"
	I1227 20:18:55.084416  188857 cri.go:96] found id: "491ed3cccbbbff35a9a2b5d6c4d3c080dde63f3a05f92d6a4d4c641e33138251"
	I1227 20:18:55.084421  188857 cri.go:96] found id: "0eb10b2c1353283b73e89aab5227babfbab9aec10d971eb4c5585bc81670d996"
	I1227 20:18:55.084426  188857 cri.go:96] found id: "8cce63881e55d69c5de5a7bdb3a3b915ce0f32cc79d339c7c36adb6f4364da00"
	I1227 20:18:55.084430  188857 cri.go:96] found id: "4387c47adc01cbe0fb87a870b9c03c95144124b9028676daacbb7207b13bb5fa"
	I1227 20:18:55.084434  188857 cri.go:96] found id: "f17d36ee3c40f02c7a5e2155ee2af467bf6e970c949b40552c24e9b25b0187ba"
	I1227 20:18:55.084438  188857 cri.go:96] found id: ""
	I1227 20:18:55.084480  188857 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:18:55.377718  188857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:18:55.390373  188857 pause.go:52] kubelet running: false
	I1227 20:18:55.390435  188857 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:18:55.502713  188857 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:18:55.502779  188857 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:18:55.563311  188857 cri.go:96] found id: "cf03e8762effdfc8615d0825fc25c76bf309030bca6f5b5dba1cf2bc45ab7b9b"
	I1227 20:18:55.563330  188857 cri.go:96] found id: "dd9c37b6e0ef0a6ff5fee1507e6965f442c627d3752e5bb000e23710ed200f10"
	I1227 20:18:55.563334  188857 cri.go:96] found id: "491ed3cccbbbff35a9a2b5d6c4d3c080dde63f3a05f92d6a4d4c641e33138251"
	I1227 20:18:55.563337  188857 cri.go:96] found id: "0eb10b2c1353283b73e89aab5227babfbab9aec10d971eb4c5585bc81670d996"
	I1227 20:18:55.563340  188857 cri.go:96] found id: "8cce63881e55d69c5de5a7bdb3a3b915ce0f32cc79d339c7c36adb6f4364da00"
	I1227 20:18:55.563343  188857 cri.go:96] found id: "4387c47adc01cbe0fb87a870b9c03c95144124b9028676daacbb7207b13bb5fa"
	I1227 20:18:55.563345  188857 cri.go:96] found id: "f17d36ee3c40f02c7a5e2155ee2af467bf6e970c949b40552c24e9b25b0187ba"
	I1227 20:18:55.563348  188857 cri.go:96] found id: ""
	I1227 20:18:55.563406  188857 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:18:56.210448  188857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:18:56.222925  188857 pause.go:52] kubelet running: false
	I1227 20:18:56.222983  188857 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:18:56.329823  188857 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:18:56.329899  188857 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:18:56.392541  188857 cri.go:96] found id: "cf03e8762effdfc8615d0825fc25c76bf309030bca6f5b5dba1cf2bc45ab7b9b"
	I1227 20:18:56.392567  188857 cri.go:96] found id: "dd9c37b6e0ef0a6ff5fee1507e6965f442c627d3752e5bb000e23710ed200f10"
	I1227 20:18:56.392573  188857 cri.go:96] found id: "491ed3cccbbbff35a9a2b5d6c4d3c080dde63f3a05f92d6a4d4c641e33138251"
	I1227 20:18:56.392581  188857 cri.go:96] found id: "0eb10b2c1353283b73e89aab5227babfbab9aec10d971eb4c5585bc81670d996"
	I1227 20:18:56.392585  188857 cri.go:96] found id: "8cce63881e55d69c5de5a7bdb3a3b915ce0f32cc79d339c7c36adb6f4364da00"
	I1227 20:18:56.392588  188857 cri.go:96] found id: "4387c47adc01cbe0fb87a870b9c03c95144124b9028676daacbb7207b13bb5fa"
	I1227 20:18:56.392591  188857 cri.go:96] found id: "f17d36ee3c40f02c7a5e2155ee2af467bf6e970c949b40552c24e9b25b0187ba"
	I1227 20:18:56.392594  188857 cri.go:96] found id: ""
	I1227 20:18:56.392629  188857 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:18:56.405403  188857 out.go:203] 
	W1227 20:18:56.406435  188857 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:18:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:18:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 20:18:56.406464  188857 out.go:285] * 
	* 
	W1227 20:18:56.408005  188857 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 20:18:56.408942  188857 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-260501 --alsologtostderr -v=5" : exit status 80
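Worth noting from the stderr above: before the pause gave up, it ran "sudo systemctl disable --now kubelet" (kubelet running flips from true to false) and then retried "runc list" several times, so the node is left with the kubelet stopped even though the pause itself failed; that is consistent with the status check below exiting with status 2. A recovery sketch, assuming the goal is to return the node to its pre-pause state (commands assumed, not taken from the test):

	# restart the kubelet that the failed pause disabled
	out/minikube-linux-amd64 -p pause-260501 ssh -- sudo systemctl enable --now kubelet

Running "out/minikube-linux-amd64 unpause -p pause-260501" would normally restart the kubelet as well, though in this run it would likely hit the same runc error.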
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-260501
helpers_test.go:244: (dbg) docker inspect pause-260501:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "73cc2f8d3fab2e57305cfc83844ea3b201f329b7ecbfbf8961374e0c0dda73ef",
	        "Created": "2025-12-27T20:18:08.236880261Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 172832,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:18:08.750685284Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/73cc2f8d3fab2e57305cfc83844ea3b201f329b7ecbfbf8961374e0c0dda73ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/73cc2f8d3fab2e57305cfc83844ea3b201f329b7ecbfbf8961374e0c0dda73ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/73cc2f8d3fab2e57305cfc83844ea3b201f329b7ecbfbf8961374e0c0dda73ef/hosts",
	        "LogPath": "/var/lib/docker/containers/73cc2f8d3fab2e57305cfc83844ea3b201f329b7ecbfbf8961374e0c0dda73ef/73cc2f8d3fab2e57305cfc83844ea3b201f329b7ecbfbf8961374e0c0dda73ef-json.log",
	        "Name": "/pause-260501",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-260501:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-260501",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "73cc2f8d3fab2e57305cfc83844ea3b201f329b7ecbfbf8961374e0c0dda73ef",
	                "LowerDir": "/var/lib/docker/overlay2/879cbdbd5a4977584a35cb61897e185f8a8227fe572b0e349e527c9eafda72ce-init/diff:/var/lib/docker/overlay2/37a0694d5e09176fe02add5b16d603e44d65e3a985a3465775c60819ea782502/diff",
	                "MergedDir": "/var/lib/docker/overlay2/879cbdbd5a4977584a35cb61897e185f8a8227fe572b0e349e527c9eafda72ce/merged",
	                "UpperDir": "/var/lib/docker/overlay2/879cbdbd5a4977584a35cb61897e185f8a8227fe572b0e349e527c9eafda72ce/diff",
	                "WorkDir": "/var/lib/docker/overlay2/879cbdbd5a4977584a35cb61897e185f8a8227fe572b0e349e527c9eafda72ce/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-260501",
	                "Source": "/var/lib/docker/volumes/pause-260501/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-260501",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-260501",
	                "name.minikube.sigs.k8s.io": "pause-260501",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a8093a7066c3fb95641d14ad577c93ce8e14d3ac819cf5ca39ac9c00646ce383",
	            "SandboxKey": "/var/run/docker/netns/a8093a7066c3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32963"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32964"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32967"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32965"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32966"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-260501": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0469b276cb64b2333ed97422179f2b2e80ce4dc2ac593125d2c72990acc87671",
	                    "EndpointID": "79518ff8a1b8cd37e3512bab3d282854e7eaab282204ac9832740ed1f237d7ee",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "fe:d8:05:04:6d:e7",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-260501",
	                        "73cc2f8d3fab"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-260501 -n pause-260501
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-260501 -n pause-260501: exit status 2 (316.708185ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-260501 logs -n 25
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-639699 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-639699       │ jenkins │ v1.37.0 │ 27 Dec 25 20:16 UTC │                     │
	│ stop    │ -p scheduled-stop-639699 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-639699       │ jenkins │ v1.37.0 │ 27 Dec 25 20:16 UTC │                     │
	│ stop    │ -p scheduled-stop-639699 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-639699       │ jenkins │ v1.37.0 │ 27 Dec 25 20:16 UTC │                     │
	│ stop    │ -p scheduled-stop-639699 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-639699       │ jenkins │ v1.37.0 │ 27 Dec 25 20:16 UTC │                     │
	│ stop    │ -p scheduled-stop-639699 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-639699       │ jenkins │ v1.37.0 │ 27 Dec 25 20:16 UTC │                     │
	│ stop    │ -p scheduled-stop-639699 --cancel-scheduled                                                                                              │ scheduled-stop-639699       │ jenkins │ v1.37.0 │ 27 Dec 25 20:16 UTC │ 27 Dec 25 20:16 UTC │
	│ stop    │ -p scheduled-stop-639699 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-639699       │ jenkins │ v1.37.0 │ 27 Dec 25 20:16 UTC │                     │
	│ stop    │ -p scheduled-stop-639699 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-639699       │ jenkins │ v1.37.0 │ 27 Dec 25 20:16 UTC │                     │
	│ stop    │ -p scheduled-stop-639699 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-639699       │ jenkins │ v1.37.0 │ 27 Dec 25 20:16 UTC │ 27 Dec 25 20:17 UTC │
	│ delete  │ -p scheduled-stop-639699                                                                                                                 │ scheduled-stop-639699       │ jenkins │ v1.37.0 │ 27 Dec 25 20:17 UTC │ 27 Dec 25 20:17 UTC │
	│ start   │ -p insufficient-storage-352558 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-352558 │ jenkins │ v1.37.0 │ 27 Dec 25 20:17 UTC │                     │
	│ delete  │ -p insufficient-storage-352558                                                                                                           │ insufficient-storage-352558 │ jenkins │ v1.37.0 │ 27 Dec 25 20:17 UTC │ 27 Dec 25 20:17 UTC │
	│ start   │ -p pause-260501 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-260501                │ jenkins │ v1.37.0 │ 27 Dec 25 20:17 UTC │ 27 Dec 25 20:18 UTC │
	│ start   │ -p offline-crio-240096 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-240096         │ jenkins │ v1.37.0 │ 27 Dec 25 20:17 UTC │ 27 Dec 25 20:18 UTC │
	│ start   │ -p force-systemd-env-287564 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                               │ force-systemd-env-287564    │ jenkins │ v1.37.0 │ 27 Dec 25 20:17 UTC │ 27 Dec 25 20:18 UTC │
	│ start   │ -p stopped-upgrade-379247 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-379247      │ jenkins │ v1.35.0 │ 27 Dec 25 20:17 UTC │ 27 Dec 25 20:18 UTC │
	│ delete  │ -p force-systemd-env-287564                                                                                                              │ force-systemd-env-287564    │ jenkins │ v1.37.0 │ 27 Dec 25 20:18 UTC │ 27 Dec 25 20:18 UTC │
	│ start   │ -p missing-upgrade-167772 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-167772      │ jenkins │ v1.35.0 │ 27 Dec 25 20:18 UTC │ 27 Dec 25 20:18 UTC │
	│ stop    │ stopped-upgrade-379247 stop                                                                                                              │ stopped-upgrade-379247      │ jenkins │ v1.35.0 │ 27 Dec 25 20:18 UTC │ 27 Dec 25 20:18 UTC │
	│ start   │ -p stopped-upgrade-379247 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-379247      │ jenkins │ v1.37.0 │ 27 Dec 25 20:18 UTC │                     │
	│ start   │ -p pause-260501 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-260501                │ jenkins │ v1.37.0 │ 27 Dec 25 20:18 UTC │ 27 Dec 25 20:18 UTC │
	│ delete  │ -p offline-crio-240096                                                                                                                   │ offline-crio-240096         │ jenkins │ v1.37.0 │ 27 Dec 25 20:18 UTC │ 27 Dec 25 20:18 UTC │
	│ start   │ -p kubernetes-upgrade-498227 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-498227   │ jenkins │ v1.37.0 │ 27 Dec 25 20:18 UTC │                     │
	│ pause   │ -p pause-260501 --alsologtostderr -v=5                                                                                                   │ pause-260501                │ jenkins │ v1.37.0 │ 27 Dec 25 20:18 UTC │                     │
	│ start   │ -p missing-upgrade-167772 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-167772      │ jenkins │ v1.37.0 │ 27 Dec 25 20:18 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:18:54
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:18:54.821372  189369 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:18:54.821650  189369 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:18:54.821660  189369 out.go:374] Setting ErrFile to fd 2...
	I1227 20:18:54.821664  189369 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:18:54.821862  189369 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:18:54.822291  189369 out.go:368] Setting JSON to false
	I1227 20:18:54.823396  189369 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3684,"bootTime":1766863051,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 20:18:54.823450  189369 start.go:143] virtualization: kvm guest
	I1227 20:18:54.826866  189369 out.go:179] * [missing-upgrade-167772] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 20:18:54.828179  189369 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:18:54.828226  189369 notify.go:221] Checking for updates...
	I1227 20:18:54.831564  189369 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:18:54.833294  189369 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:18:54.835674  189369 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube
	I1227 20:18:54.836995  189369 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 20:18:54.838322  189369 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:18:54.840035  189369 config.go:182] Loaded profile config "missing-upgrade-167772": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1227 20:18:54.843003  189369 out.go:179] * Kubernetes 1.35.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.35.0
	I1227 20:18:54.844275  189369 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:18:54.872226  189369 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 20:18:54.872390  189369 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:18:54.938531  189369 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-27 20:18:54.928434134 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:18:54.938634  189369 docker.go:319] overlay module found
	I1227 20:18:54.940436  189369 out.go:179] * Using the docker driver based on existing profile
	I1227 20:18:54.941691  189369 start.go:309] selected driver: docker
	I1227 20:18:54.941709  189369 start.go:928] validating driver "docker" against &{Name:missing-upgrade-167772 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-167772 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:18:54.941802  189369 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:18:54.942484  189369 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:18:55.010033  189369 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-27 20:18:55.000198359 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:18:55.010302  189369 cni.go:84] Creating CNI manager for ""
	I1227 20:18:55.010388  189369 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:18:55.010451  189369 start.go:353] cluster config:
	{Name:missing-upgrade-167772 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-167772 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:18:55.012247  189369 out.go:179] * Starting "missing-upgrade-167772" primary control-plane node in "missing-upgrade-167772" cluster
	I1227 20:18:55.013342  189369 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:18:55.014598  189369 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:18:55.015647  189369 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1227 20:18:55.015682  189369 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1227 20:18:55.015705  189369 cache.go:65] Caching tarball of preloaded images
	I1227 20:18:55.015745  189369 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I1227 20:18:55.015819  189369 preload.go:251] Found /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 20:18:55.015830  189369 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1227 20:18:55.015970  189369 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/missing-upgrade-167772/config.json ...
	I1227 20:18:55.043802  189369 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I1227 20:18:55.043828  189369 cache.go:158] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I1227 20:18:55.043849  189369 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:18:55.043968  189369 start.go:360] acquireMachinesLock for missing-upgrade-167772: {Name:mkc8b5e19e943cc83a6dc66547859d93cd64f2cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:18:55.044080  189369 start.go:364] duration metric: took 75.923µs to acquireMachinesLock for "missing-upgrade-167772"
	I1227 20:18:55.044101  189369 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:18:55.044108  189369 fix.go:54] fixHost starting: 
	I1227 20:18:55.044492  189369 cli_runner.go:164] Run: docker container inspect missing-upgrade-167772 --format={{.State.Status}}
	W1227 20:18:55.067315  189369 cli_runner.go:211] docker container inspect missing-upgrade-167772 --format={{.State.Status}} returned with exit code 1
	I1227 20:18:55.067376  189369 fix.go:112] recreateIfNeeded on missing-upgrade-167772: state= err=unknown state "missing-upgrade-167772": docker container inspect missing-upgrade-167772 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-167772
	I1227 20:18:55.067397  189369 fix.go:117] machineExists: false. err=machine does not exist
	I1227 20:18:55.068837  189369 out.go:179] * docker "missing-upgrade-167772" container is missing, will recreate.
	I1227 20:18:50.648374  188109 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 20:18:50.648669  188109 start.go:159] libmachine.API.Create for "kubernetes-upgrade-498227" (driver="docker")
	I1227 20:18:50.648705  188109 client.go:173] LocalClient.Create starting
	I1227 20:18:50.648778  188109 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem
	I1227 20:18:50.648825  188109 main.go:144] libmachine: Decoding PEM data...
	I1227 20:18:50.648856  188109 main.go:144] libmachine: Parsing certificate...
	I1227 20:18:50.648955  188109 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem
	I1227 20:18:50.648988  188109 main.go:144] libmachine: Decoding PEM data...
	I1227 20:18:50.649009  188109 main.go:144] libmachine: Parsing certificate...
	I1227 20:18:50.649418  188109 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-498227 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 20:18:50.681858  188109 cli_runner.go:211] docker network inspect kubernetes-upgrade-498227 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 20:18:50.682097  188109 network_create.go:284] running [docker network inspect kubernetes-upgrade-498227] to gather additional debugging logs...
	I1227 20:18:50.682128  188109 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-498227
	W1227 20:18:50.701374  188109 cli_runner.go:211] docker network inspect kubernetes-upgrade-498227 returned with exit code 1
	I1227 20:18:50.701412  188109 network_create.go:287] error running [docker network inspect kubernetes-upgrade-498227]: docker network inspect kubernetes-upgrade-498227: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-498227 not found
	I1227 20:18:50.701429  188109 network_create.go:289] output of [docker network inspect kubernetes-upgrade-498227]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-498227 not found
	
	** /stderr **
	I1227 20:18:50.701581  188109 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:18:50.721770  188109 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0db0ba8938bc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:24:76:f1:9a:26} reservation:<nil>}
	I1227 20:18:50.722348  188109 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-11f8d597a005 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:16:b4:6c:7e:ff:91} reservation:<nil>}
	I1227 20:18:50.722856  188109 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f7cf3350a110 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:14:0b:19:b4:4d} reservation:<nil>}
	I1227 20:18:50.723412  188109 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0469b276cb64 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:06:98:2f:52:67:89} reservation:<nil>}
	I1227 20:18:50.724111  188109 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-abdd7ab6beb8 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:52:de:b3:2a:fe:a8} reservation:<nil>}
	I1227 20:18:50.724856  188109 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fa76a0}
	I1227 20:18:50.724888  188109 network_create.go:124] attempt to create docker network kubernetes-upgrade-498227 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1227 20:18:50.724951  188109 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-498227 kubernetes-upgrade-498227
	I1227 20:18:50.775733  188109 network_create.go:108] docker network kubernetes-upgrade-498227 192.168.94.0/24 created
	I1227 20:18:50.775769  188109 kic.go:121] calculated static IP "192.168.94.2" for the "kubernetes-upgrade-498227" container
	I1227 20:18:50.775833  188109 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 20:18:50.794869  188109 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-498227 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-498227 --label created_by.minikube.sigs.k8s.io=true
	I1227 20:18:50.812191  188109 oci.go:103] Successfully created a docker volume kubernetes-upgrade-498227
	I1227 20:18:50.812261  188109 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-498227-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-498227 --entrypoint /usr/bin/test -v kubernetes-upgrade-498227:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 20:18:51.209180  188109 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-498227
	I1227 20:18:51.209262  188109 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 20:18:51.209281  188109 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 20:18:51.209351  188109 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-498227:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 20:18:54.401694  188109 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-498227:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.1922302s)
	I1227 20:18:54.401748  188109 kic.go:203] duration metric: took 3.192462199s to extract preloaded images to volume ...
	W1227 20:18:54.401890  188109 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1227 20:18:54.401969  188109 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1227 20:18:54.402037  188109 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 20:18:54.462132  188109 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-498227 --name kubernetes-upgrade-498227 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-498227 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-498227 --network kubernetes-upgrade-498227 --ip 192.168.94.2 --volume kubernetes-upgrade-498227:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 20:18:54.799459  188109 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-498227 --format={{.State.Running}}
	I1227 20:18:54.819516  188109 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-498227 --format={{.State.Status}}
	I1227 20:18:54.839718  188109 cli_runner.go:164] Run: docker exec kubernetes-upgrade-498227 stat /var/lib/dpkg/alternatives/iptables
	I1227 20:18:54.904211  188109 oci.go:144] the created container "kubernetes-upgrade-498227" has a running status.
	I1227 20:18:54.904246  188109 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22332-10897/.minikube/machines/kubernetes-upgrade-498227/id_rsa...
	I1227 20:18:54.950676  188109 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22332-10897/.minikube/machines/kubernetes-upgrade-498227/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 20:18:54.987839  188109 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-498227 --format={{.State.Status}}
	I1227 20:18:55.011266  188109 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 20:18:55.011286  188109 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-498227 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 20:18:55.063935  188109 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-498227 --format={{.State.Status}}
	I1227 20:18:55.086213  188109 machine.go:94] provisionDockerMachine start ...
	I1227 20:18:55.086323  188109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-498227
	I1227 20:18:55.107412  188109 main.go:144] libmachine: Using SSH client type: native
	I1227 20:18:55.107939  188109 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1227 20:18:55.107962  188109 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:18:55.108781  188109 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39196->127.0.0.1:32993: read: connection reset by peer
	I1227 20:18:55.130995  183013 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1227 20:18:55.131042  183013 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	
	
	==> CRI-O <==
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.15095266Z" level=info msg="RDT not available in the host system"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.150966864Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.15193319Z" level=info msg="Conmon does support the --sync option"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.151952758Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.151968374Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.152892114Z" level=info msg="Conmon does support the --sync option"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.152920716Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.157520101Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.157555414Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.158293845Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.159104393Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.159170481Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.248400793Z" level=info msg="Got pod network &{Name:coredns-7d764666f9-zsngz Namespace:kube-system ID:bb78db73bcb71148030dbc3cf14c5a68a41f2d7d5e78a13e49c97e09e34f5ce1 UID:10d76dd3-476a-444b-9a7a-9aafadc0f4c6 NetNS:/var/run/netns/2d940d51-9b5f-4413-9e61-bc6cc173eb2a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00066e138}] Aliases:map[]}"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.248672319Z" level=info msg="Checking pod kube-system_coredns-7d764666f9-zsngz for CNI network kindnet (type=ptp)"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.24926871Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.249296467Z" level=info msg="Starting seccomp notifier watcher"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.2493434Z" level=info msg="Create NRI interface"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.249459807Z" level=info msg="built-in NRI default validator is disabled"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.249478884Z" level=info msg="runtime interface created"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.249493435Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.249502012Z" level=info msg="runtime interface starting up..."
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.249509452Z" level=info msg="starting plugins..."
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.249534958Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.250060217Z" level=info msg="No systemd watchdog enabled"
	Dec 27 20:18:50 pause-260501 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	cf03e8762effd       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                     12 seconds ago      Running             coredns                   0                   bb78db73bcb71       coredns-7d764666f9-zsngz               kube-system
	dd9c37b6e0ef0       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   23 seconds ago      Running             kindnet-cni               0                   7a786a3aada94       kindnet-6b2pm                          kube-system
	491ed3cccbbbf       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                     27 seconds ago      Running             kube-proxy                0                   cc09814536e36       kube-proxy-clszm                       kube-system
	0eb10b2c13532       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                     37 seconds ago      Running             kube-controller-manager   0                   0520fdbb3f5ee       kube-controller-manager-pause-260501   kube-system
	8cce63881e55d       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                     37 seconds ago      Running             etcd                      0                   2f1ef9ceb1542       etcd-pause-260501                      kube-system
	4387c47adc01c       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                     37 seconds ago      Running             kube-scheduler            0                   23c88973c41eb       kube-scheduler-pause-260501            kube-system
	f17d36ee3c40f       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                     37 seconds ago      Running             kube-apiserver            0                   1a730cda3f421       kube-apiserver-pause-260501            kube-system
	
	
	==> coredns [cf03e8762effdfc8615d0825fc25c76bf309030bca6f5b5dba1cf2bc45ab7b9b] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:36813 - 59749 "HINFO IN 2477704941545386971.658493180491937193. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.08444164s
	
	
	==> describe nodes <==
	Name:               pause-260501
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-260501
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=pause-260501
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_18_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:18:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-260501
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:18:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:18:44 +0000   Sat, 27 Dec 2025 20:18:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:18:44 +0000   Sat, 27 Dec 2025 20:18:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:18:44 +0000   Sat, 27 Dec 2025 20:18:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:18:44 +0000   Sat, 27 Dec 2025 20:18:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-260501
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                1d3de4f9-9d41-4903-b62c-e950bde49614
	  Boot ID:                    e862e3ac-f8e7-431b-a536-9252fc29cb3f
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-zsngz                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-pause-260501                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-6b2pm                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-pause-260501             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-pause-260501    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-clszm                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-pause-260501             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  29s   node-controller  Node pause-260501 event: Registered Node pause-260501 in Controller
	
	
	==> dmesg <==
	[Dec27 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001882] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.083012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393120] i8042: Warning: Keylock active
	[  +0.020152] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501363] block sda: the capability attribute has been deprecated.
	[  +0.091803] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026212] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.286875] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [8cce63881e55d69c5de5a7bdb3a3b915ce0f32cc79d339c7c36adb6f4364da00] <==
	{"level":"info","ts":"2025-12-27T20:18:20.565756Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T20:18:20.565790Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:18:20.565814Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-12-27T20:18:20.565831Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T20:18:20.566376Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:18:20.566895Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:18:20.566980Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:18:20.567044Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:18:20.566895Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-260501 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:18:20.567141Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:18:20.567177Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:18:20.567242Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:18:20.567275Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T20:18:20.567224Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T20:18:20.567365Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T20:18:20.568347Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:18:20.568547Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:18:20.571626Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:18:20.572107Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T20:18:33.197757Z","caller":"traceutil/trace.go:172","msg":"trace[1020165451] transaction","detail":"{read_only:false; response_revision:370; number_of_response:1; }","duration":"101.90908ms","start":"2025-12-27T20:18:33.095827Z","end":"2025-12-27T20:18:33.197736Z","steps":["trace[1020165451] 'process raft request'  (duration: 101.768671ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:18:33.348689Z","caller":"traceutil/trace.go:172","msg":"trace[152766417] transaction","detail":"{read_only:false; response_revision:371; number_of_response:1; }","duration":"143.213872ms","start":"2025-12-27T20:18:33.205455Z","end":"2025-12-27T20:18:33.348669Z","steps":["trace[152766417] 'process raft request'  (duration: 120.04293ms)","trace[152766417] 'compare'  (duration: 22.972289ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-27T20:18:33.936128Z","caller":"traceutil/trace.go:172","msg":"trace[2115028874] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"157.27219ms","start":"2025-12-27T20:18:33.778836Z","end":"2025-12-27T20:18:33.936108Z","steps":["trace[2115028874] 'process raft request'  (duration: 157.138551ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:18:34.577687Z","caller":"traceutil/trace.go:172","msg":"trace[1835604228] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"140.220729ms","start":"2025-12-27T20:18:34.437448Z","end":"2025-12-27T20:18:34.577669Z","steps":["trace[1835604228] 'process raft request'  (duration: 119.555366ms)","trace[1835604228] 'compare'  (duration: 20.543845ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-27T20:18:53.177578Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.948309ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-pause-260501\" limit:1 ","response":"range_response_count:1 size:4783"}
	{"level":"info","ts":"2025-12-27T20:18:53.177685Z","caller":"traceutil/trace.go:172","msg":"trace[611882997] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-pause-260501; range_end:; response_count:1; response_revision:409; }","duration":"108.072685ms","start":"2025-12-27T20:18:53.069596Z","end":"2025-12-27T20:18:53.177668Z","steps":["trace[611882997] 'range keys from in-memory index tree'  (duration: 107.787109ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:18:57 up  1:01,  0 user,  load average: 3.93, 1.97, 1.38
	Linux pause-260501 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [dd9c37b6e0ef0a6ff5fee1507e6965f442c627d3752e5bb000e23710ed200f10] <==
	I1227 20:18:34.273668       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:18:34.274013       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 20:18:34.274179       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:18:34.274207       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:18:34.274249       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:18:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:18:34.476875       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:18:34.476902       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:18:34.476938       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:18:34.477089       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 20:18:34.778069       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:18:34.778106       1 metrics.go:72] Registering metrics
	I1227 20:18:34.778166       1 controller.go:711] "Syncing nftables rules"
	I1227 20:18:44.477063       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:18:44.477142       1 main.go:301] handling current node
	I1227 20:18:54.485022       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:18:54.485059       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f17d36ee3c40f02c7a5e2155ee2af467bf6e970c949b40552c24e9b25b0187ba] <==
	I1227 20:18:21.945131       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 20:18:21.945153       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 20:18:21.945167       1 default_servicecidr_controller.go:169] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1227 20:18:21.946100       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 20:18:21.951022       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:18:21.951088       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 20:18:21.957294       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:18:22.140486       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:18:22.835396       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1227 20:18:22.838748       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1227 20:18:22.838765       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:18:23.286181       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:18:23.321468       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:18:23.437480       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 20:18:23.444248       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1227 20:18:23.445307       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 20:18:23.449169       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:18:23.855944       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:18:24.463622       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 20:18:24.472666       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 20:18:24.480614       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1227 20:18:29.314677       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:18:29.320353       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:18:29.408266       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:18:29.857277       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [0eb10b2c1353283b73e89aab5227babfbab9aec10d971eb4c5585bc81670d996] <==
	I1227 20:18:28.661387       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.661395       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.661782       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.661791       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.661799       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.661808       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.662795       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.662850       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.662874       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.662878       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.663171       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.664100       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.664147       1 range_allocator.go:177] "Sending events to api server"
	I1227 20:18:28.664178       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 20:18:28.664191       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:18:28.664196       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.664938       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.665092       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.665966       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:18:28.672937       1 range_allocator.go:433] "Set node PodCIDR" node="pause-260501" podCIDRs=["10.244.0.0/24"]
	I1227 20:18:28.764203       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.764223       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:18:28.764230       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:18:28.766558       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:48.662015       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [491ed3cccbbbff35a9a2b5d6c4d3c080dde63f3a05f92d6a4d4c641e33138251] <==
	I1227 20:18:30.307997       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:18:30.361209       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:18:30.462416       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:30.462455       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 20:18:30.462575       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:18:30.483309       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:18:30.483354       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:18:30.489015       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:18:30.489353       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:18:30.489378       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:18:30.490767       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:18:30.490793       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:18:30.490807       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:18:30.490794       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:18:30.490769       1 config.go:200] "Starting service config controller"
	I1227 20:18:30.490853       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:18:30.490902       1 config.go:309] "Starting node config controller"
	I1227 20:18:30.490951       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:18:30.591977       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:18:30.591995       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 20:18:30.592004       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:18:30.592050       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [4387c47adc01cbe0fb87a870b9c03c95144124b9028676daacbb7207b13bb5fa] <==
	E1227 20:18:21.910492       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 20:18:21.910542       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 20:18:21.910599       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 20:18:21.910679       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 20:18:21.910723       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 20:18:21.910773       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 20:18:21.910782       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 20:18:21.910798       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 20:18:21.910851       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 20:18:21.910862       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 20:18:21.910999       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 20:18:21.911530       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 20:18:21.911562       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 20:18:22.774645       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 20:18:22.885441       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 20:18:22.894640       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 20:18:22.907130       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 20:18:23.006778       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 20:18:23.013741       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 20:18:23.018972       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 20:18:23.038438       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 20:18:23.039256       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 20:18:23.115907       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 20:18:23.300129       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1227 20:18:25.500832       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:18:29 pause-260501 kubelet[1278]: I1227 20:18:29.949693    1278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsv5k\" (UniqueName: \"kubernetes.io/projected/64f8ad63-3c0a-4471-94f7-82f0fae0557c-kube-api-access-nsv5k\") pod \"kindnet-6b2pm\" (UID: \"64f8ad63-3c0a-4471-94f7-82f0fae0557c\") " pod="kube-system/kindnet-6b2pm"
	Dec 27 20:18:30 pause-260501 kubelet[1278]: I1227 20:18:30.370945    1278 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-clszm" podStartSLOduration=1.3709260699999999 podStartE2EDuration="1.37092607s" podCreationTimestamp="2025-12-27 20:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:18:30.370527622 +0000 UTC m=+6.143401297" watchObservedRunningTime="2025-12-27 20:18:30.37092607 +0000 UTC m=+6.143799736"
	Dec 27 20:18:31 pause-260501 kubelet[1278]: E1227 20:18:31.774320    1278 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-260501" containerName="kube-apiserver"
	Dec 27 20:18:33 pause-260501 kubelet[1278]: E1227 20:18:33.089560    1278 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-260501" containerName="etcd"
	Dec 27 20:18:33 pause-260501 kubelet[1278]: E1227 20:18:33.657781    1278 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-260501" containerName="kube-controller-manager"
	Dec 27 20:18:34 pause-260501 kubelet[1278]: I1227 20:18:34.433455    1278 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-6b2pm" podStartSLOduration=1.958661194 podStartE2EDuration="5.433438725s" podCreationTimestamp="2025-12-27 20:18:29 +0000 UTC" firstStartedPulling="2025-12-27 20:18:30.192689248 +0000 UTC m=+5.965562914" lastFinishedPulling="2025-12-27 20:18:33.66746678 +0000 UTC m=+9.440340445" observedRunningTime="2025-12-27 20:18:34.433370807 +0000 UTC m=+10.206244481" watchObservedRunningTime="2025-12-27 20:18:34.433438725 +0000 UTC m=+10.206312399"
	Dec 27 20:18:39 pause-260501 kubelet[1278]: E1227 20:18:39.781630    1278 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-260501" containerName="kube-scheduler"
	Dec 27 20:18:41 pause-260501 kubelet[1278]: E1227 20:18:41.780633    1278 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-260501" containerName="kube-apiserver"
	Dec 27 20:18:43 pause-260501 kubelet[1278]: E1227 20:18:43.090362    1278 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-260501" containerName="etcd"
	Dec 27 20:18:43 pause-260501 kubelet[1278]: E1227 20:18:43.662372    1278 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-260501" containerName="kube-controller-manager"
	Dec 27 20:18:44 pause-260501 kubelet[1278]: I1227 20:18:44.651478    1278 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 27 20:18:44 pause-260501 kubelet[1278]: I1227 20:18:44.763244    1278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10d76dd3-476a-444b-9a7a-9aafadc0f4c6-config-volume\") pod \"coredns-7d764666f9-zsngz\" (UID: \"10d76dd3-476a-444b-9a7a-9aafadc0f4c6\") " pod="kube-system/coredns-7d764666f9-zsngz"
	Dec 27 20:18:44 pause-260501 kubelet[1278]: I1227 20:18:44.763289    1278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5x5x\" (UniqueName: \"kubernetes.io/projected/10d76dd3-476a-444b-9a7a-9aafadc0f4c6-kube-api-access-h5x5x\") pod \"coredns-7d764666f9-zsngz\" (UID: \"10d76dd3-476a-444b-9a7a-9aafadc0f4c6\") " pod="kube-system/coredns-7d764666f9-zsngz"
	Dec 27 20:18:45 pause-260501 kubelet[1278]: E1227 20:18:45.403130    1278 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-zsngz" containerName="coredns"
	Dec 27 20:18:45 pause-260501 kubelet[1278]: I1227 20:18:45.437422    1278 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-zsngz" podStartSLOduration=16.437401194 podStartE2EDuration="16.437401194s" podCreationTimestamp="2025-12-27 20:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:18:45.421124914 +0000 UTC m=+21.193998589" watchObservedRunningTime="2025-12-27 20:18:45.437401194 +0000 UTC m=+21.210274868"
	Dec 27 20:18:46 pause-260501 kubelet[1278]: E1227 20:18:46.405454    1278 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-zsngz" containerName="coredns"
	Dec 27 20:18:47 pause-260501 kubelet[1278]: E1227 20:18:47.408014    1278 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-zsngz" containerName="coredns"
	Dec 27 20:18:48 pause-260501 kubelet[1278]: W1227 20:18:48.408587    1278 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 27 20:18:48 pause-260501 kubelet[1278]: E1227 20:18:48.408699    1278 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 27 20:18:48 pause-260501 kubelet[1278]: E1227 20:18:48.408750    1278 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 27 20:18:48 pause-260501 kubelet[1278]: E1227 20:18:48.408766    1278 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 27 20:18:54 pause-260501 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 20:18:54 pause-260501 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 20:18:54 pause-260501 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:18:54 pause-260501 systemd[1]: kubelet.service: Consumed 1.339s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-260501 -n pause-260501
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-260501 -n pause-260501: exit status 2 (310.696471ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-260501 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-260501
helpers_test.go:244: (dbg) docker inspect pause-260501:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "73cc2f8d3fab2e57305cfc83844ea3b201f329b7ecbfbf8961374e0c0dda73ef",
	        "Created": "2025-12-27T20:18:08.236880261Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 172832,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:18:08.750685284Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/73cc2f8d3fab2e57305cfc83844ea3b201f329b7ecbfbf8961374e0c0dda73ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/73cc2f8d3fab2e57305cfc83844ea3b201f329b7ecbfbf8961374e0c0dda73ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/73cc2f8d3fab2e57305cfc83844ea3b201f329b7ecbfbf8961374e0c0dda73ef/hosts",
	        "LogPath": "/var/lib/docker/containers/73cc2f8d3fab2e57305cfc83844ea3b201f329b7ecbfbf8961374e0c0dda73ef/73cc2f8d3fab2e57305cfc83844ea3b201f329b7ecbfbf8961374e0c0dda73ef-json.log",
	        "Name": "/pause-260501",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-260501:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-260501",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "73cc2f8d3fab2e57305cfc83844ea3b201f329b7ecbfbf8961374e0c0dda73ef",
	                "LowerDir": "/var/lib/docker/overlay2/879cbdbd5a4977584a35cb61897e185f8a8227fe572b0e349e527c9eafda72ce-init/diff:/var/lib/docker/overlay2/37a0694d5e09176fe02add5b16d603e44d65e3a985a3465775c60819ea782502/diff",
	                "MergedDir": "/var/lib/docker/overlay2/879cbdbd5a4977584a35cb61897e185f8a8227fe572b0e349e527c9eafda72ce/merged",
	                "UpperDir": "/var/lib/docker/overlay2/879cbdbd5a4977584a35cb61897e185f8a8227fe572b0e349e527c9eafda72ce/diff",
	                "WorkDir": "/var/lib/docker/overlay2/879cbdbd5a4977584a35cb61897e185f8a8227fe572b0e349e527c9eafda72ce/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-260501",
	                "Source": "/var/lib/docker/volumes/pause-260501/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-260501",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-260501",
	                "name.minikube.sigs.k8s.io": "pause-260501",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a8093a7066c3fb95641d14ad577c93ce8e14d3ac819cf5ca39ac9c00646ce383",
	            "SandboxKey": "/var/run/docker/netns/a8093a7066c3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32963"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32964"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32967"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32965"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32966"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-260501": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0469b276cb64b2333ed97422179f2b2e80ce4dc2ac593125d2c72990acc87671",
	                    "EndpointID": "79518ff8a1b8cd37e3512bab3d282854e7eaab282204ac9832740ed1f237d7ee",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "fe:d8:05:04:6d:e7",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-260501",
	                        "73cc2f8d3fab"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-260501 -n pause-260501
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-260501 -n pause-260501: exit status 2 (311.18743ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-260501 logs -n 25
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-639699 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-639699       │ jenkins │ v1.37.0 │ 27 Dec 25 20:16 UTC │                     │
	│ stop    │ -p scheduled-stop-639699 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-639699       │ jenkins │ v1.37.0 │ 27 Dec 25 20:16 UTC │                     │
	│ stop    │ -p scheduled-stop-639699 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-639699       │ jenkins │ v1.37.0 │ 27 Dec 25 20:16 UTC │                     │
	│ stop    │ -p scheduled-stop-639699 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-639699       │ jenkins │ v1.37.0 │ 27 Dec 25 20:16 UTC │                     │
	│ stop    │ -p scheduled-stop-639699 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-639699       │ jenkins │ v1.37.0 │ 27 Dec 25 20:16 UTC │                     │
	│ stop    │ -p scheduled-stop-639699 --cancel-scheduled                                                                                              │ scheduled-stop-639699       │ jenkins │ v1.37.0 │ 27 Dec 25 20:16 UTC │ 27 Dec 25 20:16 UTC │
	│ stop    │ -p scheduled-stop-639699 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-639699       │ jenkins │ v1.37.0 │ 27 Dec 25 20:16 UTC │                     │
	│ stop    │ -p scheduled-stop-639699 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-639699       │ jenkins │ v1.37.0 │ 27 Dec 25 20:16 UTC │                     │
	│ stop    │ -p scheduled-stop-639699 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-639699       │ jenkins │ v1.37.0 │ 27 Dec 25 20:16 UTC │ 27 Dec 25 20:17 UTC │
	│ delete  │ -p scheduled-stop-639699                                                                                                                 │ scheduled-stop-639699       │ jenkins │ v1.37.0 │ 27 Dec 25 20:17 UTC │ 27 Dec 25 20:17 UTC │
	│ start   │ -p insufficient-storage-352558 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-352558 │ jenkins │ v1.37.0 │ 27 Dec 25 20:17 UTC │                     │
	│ delete  │ -p insufficient-storage-352558                                                                                                           │ insufficient-storage-352558 │ jenkins │ v1.37.0 │ 27 Dec 25 20:17 UTC │ 27 Dec 25 20:17 UTC │
	│ start   │ -p pause-260501 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-260501                │ jenkins │ v1.37.0 │ 27 Dec 25 20:17 UTC │ 27 Dec 25 20:18 UTC │
	│ start   │ -p offline-crio-240096 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-240096         │ jenkins │ v1.37.0 │ 27 Dec 25 20:17 UTC │ 27 Dec 25 20:18 UTC │
	│ start   │ -p force-systemd-env-287564 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                               │ force-systemd-env-287564    │ jenkins │ v1.37.0 │ 27 Dec 25 20:17 UTC │ 27 Dec 25 20:18 UTC │
	│ start   │ -p stopped-upgrade-379247 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-379247      │ jenkins │ v1.35.0 │ 27 Dec 25 20:17 UTC │ 27 Dec 25 20:18 UTC │
	│ delete  │ -p force-systemd-env-287564                                                                                                              │ force-systemd-env-287564    │ jenkins │ v1.37.0 │ 27 Dec 25 20:18 UTC │ 27 Dec 25 20:18 UTC │
	│ start   │ -p missing-upgrade-167772 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-167772      │ jenkins │ v1.35.0 │ 27 Dec 25 20:18 UTC │ 27 Dec 25 20:18 UTC │
	│ stop    │ stopped-upgrade-379247 stop                                                                                                              │ stopped-upgrade-379247      │ jenkins │ v1.35.0 │ 27 Dec 25 20:18 UTC │ 27 Dec 25 20:18 UTC │
	│ start   │ -p stopped-upgrade-379247 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-379247      │ jenkins │ v1.37.0 │ 27 Dec 25 20:18 UTC │                     │
	│ start   │ -p pause-260501 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-260501                │ jenkins │ v1.37.0 │ 27 Dec 25 20:18 UTC │ 27 Dec 25 20:18 UTC │
	│ delete  │ -p offline-crio-240096                                                                                                                   │ offline-crio-240096         │ jenkins │ v1.37.0 │ 27 Dec 25 20:18 UTC │ 27 Dec 25 20:18 UTC │
	│ start   │ -p kubernetes-upgrade-498227 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-498227   │ jenkins │ v1.37.0 │ 27 Dec 25 20:18 UTC │                     │
	│ pause   │ -p pause-260501 --alsologtostderr -v=5                                                                                                   │ pause-260501                │ jenkins │ v1.37.0 │ 27 Dec 25 20:18 UTC │                     │
	│ start   │ -p missing-upgrade-167772 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-167772      │ jenkins │ v1.37.0 │ 27 Dec 25 20:18 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:18:54
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:18:54.821372  189369 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:18:54.821650  189369 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:18:54.821660  189369 out.go:374] Setting ErrFile to fd 2...
	I1227 20:18:54.821664  189369 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:18:54.821862  189369 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:18:54.822291  189369 out.go:368] Setting JSON to false
	I1227 20:18:54.823396  189369 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3684,"bootTime":1766863051,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 20:18:54.823450  189369 start.go:143] virtualization: kvm guest
	I1227 20:18:54.826866  189369 out.go:179] * [missing-upgrade-167772] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 20:18:54.828179  189369 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:18:54.828226  189369 notify.go:221] Checking for updates...
	I1227 20:18:54.831564  189369 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:18:54.833294  189369 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:18:54.835674  189369 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube
	I1227 20:18:54.836995  189369 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 20:18:54.838322  189369 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:18:54.840035  189369 config.go:182] Loaded profile config "missing-upgrade-167772": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1227 20:18:54.843003  189369 out.go:179] * Kubernetes 1.35.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.35.0
	I1227 20:18:54.844275  189369 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:18:54.872226  189369 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 20:18:54.872390  189369 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:18:54.938531  189369 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-27 20:18:54.928434134 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:18:54.938634  189369 docker.go:319] overlay module found
	I1227 20:18:54.940436  189369 out.go:179] * Using the docker driver based on existing profile
	I1227 20:18:54.941691  189369 start.go:309] selected driver: docker
	I1227 20:18:54.941709  189369 start.go:928] validating driver "docker" against &{Name:missing-upgrade-167772 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-167772 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:18:54.941802  189369 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:18:54.942484  189369 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:18:55.010033  189369 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-27 20:18:55.000198359 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:18:55.010302  189369 cni.go:84] Creating CNI manager for ""
	I1227 20:18:55.010388  189369 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:18:55.010451  189369 start.go:353] cluster config:
	{Name:missing-upgrade-167772 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-167772 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:18:55.012247  189369 out.go:179] * Starting "missing-upgrade-167772" primary control-plane node in "missing-upgrade-167772" cluster
	I1227 20:18:55.013342  189369 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:18:55.014598  189369 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:18:55.015647  189369 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1227 20:18:55.015682  189369 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1227 20:18:55.015705  189369 cache.go:65] Caching tarball of preloaded images
	I1227 20:18:55.015745  189369 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I1227 20:18:55.015819  189369 preload.go:251] Found /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 20:18:55.015830  189369 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1227 20:18:55.015970  189369 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/missing-upgrade-167772/config.json ...
	I1227 20:18:55.043802  189369 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I1227 20:18:55.043828  189369 cache.go:158] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I1227 20:18:55.043849  189369 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:18:55.043968  189369 start.go:360] acquireMachinesLock for missing-upgrade-167772: {Name:mkc8b5e19e943cc83a6dc66547859d93cd64f2cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:18:55.044080  189369 start.go:364] duration metric: took 75.923µs to acquireMachinesLock for "missing-upgrade-167772"
	I1227 20:18:55.044101  189369 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:18:55.044108  189369 fix.go:54] fixHost starting: 
	I1227 20:18:55.044492  189369 cli_runner.go:164] Run: docker container inspect missing-upgrade-167772 --format={{.State.Status}}
	W1227 20:18:55.067315  189369 cli_runner.go:211] docker container inspect missing-upgrade-167772 --format={{.State.Status}} returned with exit code 1
	I1227 20:18:55.067376  189369 fix.go:112] recreateIfNeeded on missing-upgrade-167772: state= err=unknown state "missing-upgrade-167772": docker container inspect missing-upgrade-167772 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-167772
	I1227 20:18:55.067397  189369 fix.go:117] machineExists: false. err=machine does not exist
	I1227 20:18:55.068837  189369 out.go:179] * docker "missing-upgrade-167772" container is missing, will recreate.
	I1227 20:18:50.648374  188109 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 20:18:50.648669  188109 start.go:159] libmachine.API.Create for "kubernetes-upgrade-498227" (driver="docker")
	I1227 20:18:50.648705  188109 client.go:173] LocalClient.Create starting
	I1227 20:18:50.648778  188109 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem
	I1227 20:18:50.648825  188109 main.go:144] libmachine: Decoding PEM data...
	I1227 20:18:50.648856  188109 main.go:144] libmachine: Parsing certificate...
	I1227 20:18:50.648955  188109 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem
	I1227 20:18:50.648988  188109 main.go:144] libmachine: Decoding PEM data...
	I1227 20:18:50.649009  188109 main.go:144] libmachine: Parsing certificate...
	I1227 20:18:50.649418  188109 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-498227 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 20:18:50.681858  188109 cli_runner.go:211] docker network inspect kubernetes-upgrade-498227 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 20:18:50.682097  188109 network_create.go:284] running [docker network inspect kubernetes-upgrade-498227] to gather additional debugging logs...
	I1227 20:18:50.682128  188109 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-498227
	W1227 20:18:50.701374  188109 cli_runner.go:211] docker network inspect kubernetes-upgrade-498227 returned with exit code 1
	I1227 20:18:50.701412  188109 network_create.go:287] error running [docker network inspect kubernetes-upgrade-498227]: docker network inspect kubernetes-upgrade-498227: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-498227 not found
	I1227 20:18:50.701429  188109 network_create.go:289] output of [docker network inspect kubernetes-upgrade-498227]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-498227 not found
	
	** /stderr **
	I1227 20:18:50.701581  188109 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:18:50.721770  188109 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0db0ba8938bc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:24:76:f1:9a:26} reservation:<nil>}
	I1227 20:18:50.722348  188109 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-11f8d597a005 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:16:b4:6c:7e:ff:91} reservation:<nil>}
	I1227 20:18:50.722856  188109 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f7cf3350a110 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:14:0b:19:b4:4d} reservation:<nil>}
	I1227 20:18:50.723412  188109 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0469b276cb64 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:06:98:2f:52:67:89} reservation:<nil>}
	I1227 20:18:50.724111  188109 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-abdd7ab6beb8 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:52:de:b3:2a:fe:a8} reservation:<nil>}
	I1227 20:18:50.724856  188109 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fa76a0}
	I1227 20:18:50.724888  188109 network_create.go:124] attempt to create docker network kubernetes-upgrade-498227 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1227 20:18:50.724951  188109 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-498227 kubernetes-upgrade-498227
	I1227 20:18:50.775733  188109 network_create.go:108] docker network kubernetes-upgrade-498227 192.168.94.0/24 created
	I1227 20:18:50.775769  188109 kic.go:121] calculated static IP "192.168.94.2" for the "kubernetes-upgrade-498227" container
	I1227 20:18:50.775833  188109 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 20:18:50.794869  188109 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-498227 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-498227 --label created_by.minikube.sigs.k8s.io=true
	I1227 20:18:50.812191  188109 oci.go:103] Successfully created a docker volume kubernetes-upgrade-498227
	I1227 20:18:50.812261  188109 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-498227-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-498227 --entrypoint /usr/bin/test -v kubernetes-upgrade-498227:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 20:18:51.209180  188109 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-498227
	I1227 20:18:51.209262  188109 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 20:18:51.209281  188109 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 20:18:51.209351  188109 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-498227:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 20:18:54.401694  188109 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-498227:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.1922302s)
	I1227 20:18:54.401748  188109 kic.go:203] duration metric: took 3.192462199s to extract preloaded images to volume ...
	W1227 20:18:54.401890  188109 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1227 20:18:54.401969  188109 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1227 20:18:54.402037  188109 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 20:18:54.462132  188109 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-498227 --name kubernetes-upgrade-498227 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-498227 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-498227 --network kubernetes-upgrade-498227 --ip 192.168.94.2 --volume kubernetes-upgrade-498227:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 20:18:54.799459  188109 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-498227 --format={{.State.Running}}
	I1227 20:18:54.819516  188109 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-498227 --format={{.State.Status}}
	I1227 20:18:54.839718  188109 cli_runner.go:164] Run: docker exec kubernetes-upgrade-498227 stat /var/lib/dpkg/alternatives/iptables
	I1227 20:18:54.904211  188109 oci.go:144] the created container "kubernetes-upgrade-498227" has a running status.
	I1227 20:18:54.904246  188109 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22332-10897/.minikube/machines/kubernetes-upgrade-498227/id_rsa...
	I1227 20:18:54.950676  188109 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22332-10897/.minikube/machines/kubernetes-upgrade-498227/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 20:18:54.987839  188109 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-498227 --format={{.State.Status}}
	I1227 20:18:55.011266  188109 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 20:18:55.011286  188109 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-498227 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 20:18:55.063935  188109 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-498227 --format={{.State.Status}}
	I1227 20:18:55.086213  188109 machine.go:94] provisionDockerMachine start ...
	I1227 20:18:55.086323  188109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-498227
	I1227 20:18:55.107412  188109 main.go:144] libmachine: Using SSH client type: native
	I1227 20:18:55.107939  188109 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1227 20:18:55.107962  188109 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:18:55.108781  188109 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39196->127.0.0.1:32993: read: connection reset by peer
	I1227 20:18:55.130995  183013 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1227 20:18:55.131042  183013 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	
	
	==> CRI-O <==
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.15095266Z" level=info msg="RDT not available in the host system"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.150966864Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.15193319Z" level=info msg="Conmon does support the --sync option"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.151952758Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.151968374Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.152892114Z" level=info msg="Conmon does support the --sync option"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.152920716Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.157520101Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.157555414Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.158293845Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.159104393Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.159170481Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.248400793Z" level=info msg="Got pod network &{Name:coredns-7d764666f9-zsngz Namespace:kube-system ID:bb78db73bcb71148030dbc3cf14c5a68a41f2d7d5e78a13e49c97e09e34f5ce1 UID:10d76dd3-476a-444b-9a7a-9aafadc0f4c6 NetNS:/var/run/netns/2d940d51-9b5f-4413-9e61-bc6cc173eb2a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00066e138}] Aliases:map[]}"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.248672319Z" level=info msg="Checking pod kube-system_coredns-7d764666f9-zsngz for CNI network kindnet (type=ptp)"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.24926871Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.249296467Z" level=info msg="Starting seccomp notifier watcher"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.2493434Z" level=info msg="Create NRI interface"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.249459807Z" level=info msg="built-in NRI default validator is disabled"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.249478884Z" level=info msg="runtime interface created"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.249493435Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.249502012Z" level=info msg="runtime interface starting up..."
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.249509452Z" level=info msg="starting plugins..."
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.249534958Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 27 20:18:50 pause-260501 crio[2198]: time="2025-12-27T20:18:50.250060217Z" level=info msg="No systemd watchdog enabled"
	Dec 27 20:18:50 pause-260501 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	cf03e8762effd       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                     13 seconds ago      Running             coredns                   0                   bb78db73bcb71       coredns-7d764666f9-zsngz               kube-system
	dd9c37b6e0ef0       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   25 seconds ago      Running             kindnet-cni               0                   7a786a3aada94       kindnet-6b2pm                          kube-system
	491ed3cccbbbf       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                     28 seconds ago      Running             kube-proxy                0                   cc09814536e36       kube-proxy-clszm                       kube-system
	0eb10b2c13532       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                     38 seconds ago      Running             kube-controller-manager   0                   0520fdbb3f5ee       kube-controller-manager-pause-260501   kube-system
	8cce63881e55d       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                     38 seconds ago      Running             etcd                      0                   2f1ef9ceb1542       etcd-pause-260501                      kube-system
	4387c47adc01c       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                     38 seconds ago      Running             kube-scheduler            0                   23c88973c41eb       kube-scheduler-pause-260501            kube-system
	f17d36ee3c40f       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                     38 seconds ago      Running             kube-apiserver            0                   1a730cda3f421       kube-apiserver-pause-260501            kube-system
	
	
	==> coredns [cf03e8762effdfc8615d0825fc25c76bf309030bca6f5b5dba1cf2bc45ab7b9b] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:36813 - 59749 "HINFO IN 2477704941545386971.658493180491937193. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.08444164s
	
	
	==> describe nodes <==
	Name:               pause-260501
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-260501
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=pause-260501
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_18_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:18:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-260501
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:18:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:18:44 +0000   Sat, 27 Dec 2025 20:18:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:18:44 +0000   Sat, 27 Dec 2025 20:18:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:18:44 +0000   Sat, 27 Dec 2025 20:18:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:18:44 +0000   Sat, 27 Dec 2025 20:18:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-260501
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                1d3de4f9-9d41-4903-b62c-e950bde49614
	  Boot ID:                    e862e3ac-f8e7-431b-a536-9252fc29cb3f
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-zsngz                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-pause-260501                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-6b2pm                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-pause-260501             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-pause-260501    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-clszm                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-pause-260501             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  30s   node-controller  Node pause-260501 event: Registered Node pause-260501 in Controller
	
	
	==> dmesg <==
	[Dec27 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001882] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.083012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393120] i8042: Warning: Keylock active
	[  +0.020152] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501363] block sda: the capability attribute has been deprecated.
	[  +0.091803] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026212] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.286875] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [8cce63881e55d69c5de5a7bdb3a3b915ce0f32cc79d339c7c36adb6f4364da00] <==
	{"level":"info","ts":"2025-12-27T20:18:20.565756Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T20:18:20.565790Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:18:20.565814Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-12-27T20:18:20.565831Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T20:18:20.566376Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:18:20.566895Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:18:20.566980Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:18:20.567044Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:18:20.566895Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-260501 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:18:20.567141Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:18:20.567177Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:18:20.567242Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:18:20.567275Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T20:18:20.567224Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T20:18:20.567365Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T20:18:20.568347Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:18:20.568547Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:18:20.571626Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:18:20.572107Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T20:18:33.197757Z","caller":"traceutil/trace.go:172","msg":"trace[1020165451] transaction","detail":"{read_only:false; response_revision:370; number_of_response:1; }","duration":"101.90908ms","start":"2025-12-27T20:18:33.095827Z","end":"2025-12-27T20:18:33.197736Z","steps":["trace[1020165451] 'process raft request'  (duration: 101.768671ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:18:33.348689Z","caller":"traceutil/trace.go:172","msg":"trace[152766417] transaction","detail":"{read_only:false; response_revision:371; number_of_response:1; }","duration":"143.213872ms","start":"2025-12-27T20:18:33.205455Z","end":"2025-12-27T20:18:33.348669Z","steps":["trace[152766417] 'process raft request'  (duration: 120.04293ms)","trace[152766417] 'compare'  (duration: 22.972289ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-27T20:18:33.936128Z","caller":"traceutil/trace.go:172","msg":"trace[2115028874] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"157.27219ms","start":"2025-12-27T20:18:33.778836Z","end":"2025-12-27T20:18:33.936108Z","steps":["trace[2115028874] 'process raft request'  (duration: 157.138551ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:18:34.577687Z","caller":"traceutil/trace.go:172","msg":"trace[1835604228] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"140.220729ms","start":"2025-12-27T20:18:34.437448Z","end":"2025-12-27T20:18:34.577669Z","steps":["trace[1835604228] 'process raft request'  (duration: 119.555366ms)","trace[1835604228] 'compare'  (duration: 20.543845ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-27T20:18:53.177578Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.948309ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-pause-260501\" limit:1 ","response":"range_response_count:1 size:4783"}
	{"level":"info","ts":"2025-12-27T20:18:53.177685Z","caller":"traceutil/trace.go:172","msg":"trace[611882997] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-pause-260501; range_end:; response_count:1; response_revision:409; }","duration":"108.072685ms","start":"2025-12-27T20:18:53.069596Z","end":"2025-12-27T20:18:53.177668Z","steps":["trace[611882997] 'range keys from in-memory index tree'  (duration: 107.787109ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:18:58 up  1:01,  0 user,  load average: 3.69, 1.96, 1.38
	Linux pause-260501 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [dd9c37b6e0ef0a6ff5fee1507e6965f442c627d3752e5bb000e23710ed200f10] <==
	I1227 20:18:34.273668       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:18:34.274013       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 20:18:34.274179       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:18:34.274207       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:18:34.274249       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:18:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:18:34.476875       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:18:34.476902       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:18:34.476938       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:18:34.477089       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 20:18:34.778069       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:18:34.778106       1 metrics.go:72] Registering metrics
	I1227 20:18:34.778166       1 controller.go:711] "Syncing nftables rules"
	I1227 20:18:44.477063       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:18:44.477142       1 main.go:301] handling current node
	I1227 20:18:54.485022       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:18:54.485059       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f17d36ee3c40f02c7a5e2155ee2af467bf6e970c949b40552c24e9b25b0187ba] <==
	I1227 20:18:21.945131       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 20:18:21.945153       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 20:18:21.945167       1 default_servicecidr_controller.go:169] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1227 20:18:21.946100       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 20:18:21.951022       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:18:21.951088       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 20:18:21.957294       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:18:22.140486       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:18:22.835396       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1227 20:18:22.838748       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1227 20:18:22.838765       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:18:23.286181       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:18:23.321468       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:18:23.437480       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 20:18:23.444248       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1227 20:18:23.445307       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 20:18:23.449169       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:18:23.855944       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:18:24.463622       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 20:18:24.472666       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 20:18:24.480614       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1227 20:18:29.314677       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:18:29.320353       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:18:29.408266       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:18:29.857277       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [0eb10b2c1353283b73e89aab5227babfbab9aec10d971eb4c5585bc81670d996] <==
	I1227 20:18:28.661387       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.661395       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.661782       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.661791       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.661799       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.661808       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.662795       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.662850       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.662874       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.662878       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.663171       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.664100       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.664147       1 range_allocator.go:177] "Sending events to api server"
	I1227 20:18:28.664178       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 20:18:28.664191       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:18:28.664196       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.664938       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.665092       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.665966       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:18:28.672937       1 range_allocator.go:433] "Set node PodCIDR" node="pause-260501" podCIDRs=["10.244.0.0/24"]
	I1227 20:18:28.764203       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:28.764223       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:18:28.764230       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:18:28.766558       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:48.662015       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [491ed3cccbbbff35a9a2b5d6c4d3c080dde63f3a05f92d6a4d4c641e33138251] <==
	I1227 20:18:30.307997       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:18:30.361209       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:18:30.462416       1 shared_informer.go:377] "Caches are synced"
	I1227 20:18:30.462455       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 20:18:30.462575       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:18:30.483309       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:18:30.483354       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:18:30.489015       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:18:30.489353       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:18:30.489378       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:18:30.490767       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:18:30.490793       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:18:30.490807       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:18:30.490794       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:18:30.490769       1 config.go:200] "Starting service config controller"
	I1227 20:18:30.490853       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:18:30.490902       1 config.go:309] "Starting node config controller"
	I1227 20:18:30.490951       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:18:30.591977       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:18:30.591995       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 20:18:30.592004       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:18:30.592050       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [4387c47adc01cbe0fb87a870b9c03c95144124b9028676daacbb7207b13bb5fa] <==
	E1227 20:18:21.910492       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 20:18:21.910542       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 20:18:21.910599       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 20:18:21.910679       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 20:18:21.910723       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 20:18:21.910773       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 20:18:21.910782       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 20:18:21.910798       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 20:18:21.910851       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 20:18:21.910862       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 20:18:21.910999       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 20:18:21.911530       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 20:18:21.911562       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 20:18:22.774645       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 20:18:22.885441       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 20:18:22.894640       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 20:18:22.907130       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 20:18:23.006778       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 20:18:23.013741       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 20:18:23.018972       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 20:18:23.038438       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 20:18:23.039256       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 20:18:23.115907       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 20:18:23.300129       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1227 20:18:25.500832       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:18:29 pause-260501 kubelet[1278]: I1227 20:18:29.949693    1278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsv5k\" (UniqueName: \"kubernetes.io/projected/64f8ad63-3c0a-4471-94f7-82f0fae0557c-kube-api-access-nsv5k\") pod \"kindnet-6b2pm\" (UID: \"64f8ad63-3c0a-4471-94f7-82f0fae0557c\") " pod="kube-system/kindnet-6b2pm"
	Dec 27 20:18:30 pause-260501 kubelet[1278]: I1227 20:18:30.370945    1278 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-clszm" podStartSLOduration=1.3709260699999999 podStartE2EDuration="1.37092607s" podCreationTimestamp="2025-12-27 20:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:18:30.370527622 +0000 UTC m=+6.143401297" watchObservedRunningTime="2025-12-27 20:18:30.37092607 +0000 UTC m=+6.143799736"
	Dec 27 20:18:31 pause-260501 kubelet[1278]: E1227 20:18:31.774320    1278 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-260501" containerName="kube-apiserver"
	Dec 27 20:18:33 pause-260501 kubelet[1278]: E1227 20:18:33.089560    1278 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-260501" containerName="etcd"
	Dec 27 20:18:33 pause-260501 kubelet[1278]: E1227 20:18:33.657781    1278 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-260501" containerName="kube-controller-manager"
	Dec 27 20:18:34 pause-260501 kubelet[1278]: I1227 20:18:34.433455    1278 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-6b2pm" podStartSLOduration=1.958661194 podStartE2EDuration="5.433438725s" podCreationTimestamp="2025-12-27 20:18:29 +0000 UTC" firstStartedPulling="2025-12-27 20:18:30.192689248 +0000 UTC m=+5.965562914" lastFinishedPulling="2025-12-27 20:18:33.66746678 +0000 UTC m=+9.440340445" observedRunningTime="2025-12-27 20:18:34.433370807 +0000 UTC m=+10.206244481" watchObservedRunningTime="2025-12-27 20:18:34.433438725 +0000 UTC m=+10.206312399"
	Dec 27 20:18:39 pause-260501 kubelet[1278]: E1227 20:18:39.781630    1278 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-260501" containerName="kube-scheduler"
	Dec 27 20:18:41 pause-260501 kubelet[1278]: E1227 20:18:41.780633    1278 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-260501" containerName="kube-apiserver"
	Dec 27 20:18:43 pause-260501 kubelet[1278]: E1227 20:18:43.090362    1278 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-260501" containerName="etcd"
	Dec 27 20:18:43 pause-260501 kubelet[1278]: E1227 20:18:43.662372    1278 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-260501" containerName="kube-controller-manager"
	Dec 27 20:18:44 pause-260501 kubelet[1278]: I1227 20:18:44.651478    1278 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 27 20:18:44 pause-260501 kubelet[1278]: I1227 20:18:44.763244    1278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10d76dd3-476a-444b-9a7a-9aafadc0f4c6-config-volume\") pod \"coredns-7d764666f9-zsngz\" (UID: \"10d76dd3-476a-444b-9a7a-9aafadc0f4c6\") " pod="kube-system/coredns-7d764666f9-zsngz"
	Dec 27 20:18:44 pause-260501 kubelet[1278]: I1227 20:18:44.763289    1278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5x5x\" (UniqueName: \"kubernetes.io/projected/10d76dd3-476a-444b-9a7a-9aafadc0f4c6-kube-api-access-h5x5x\") pod \"coredns-7d764666f9-zsngz\" (UID: \"10d76dd3-476a-444b-9a7a-9aafadc0f4c6\") " pod="kube-system/coredns-7d764666f9-zsngz"
	Dec 27 20:18:45 pause-260501 kubelet[1278]: E1227 20:18:45.403130    1278 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-zsngz" containerName="coredns"
	Dec 27 20:18:45 pause-260501 kubelet[1278]: I1227 20:18:45.437422    1278 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-zsngz" podStartSLOduration=16.437401194 podStartE2EDuration="16.437401194s" podCreationTimestamp="2025-12-27 20:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:18:45.421124914 +0000 UTC m=+21.193998589" watchObservedRunningTime="2025-12-27 20:18:45.437401194 +0000 UTC m=+21.210274868"
	Dec 27 20:18:46 pause-260501 kubelet[1278]: E1227 20:18:46.405454    1278 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-zsngz" containerName="coredns"
	Dec 27 20:18:47 pause-260501 kubelet[1278]: E1227 20:18:47.408014    1278 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-zsngz" containerName="coredns"
	Dec 27 20:18:48 pause-260501 kubelet[1278]: W1227 20:18:48.408587    1278 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 27 20:18:48 pause-260501 kubelet[1278]: E1227 20:18:48.408699    1278 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 27 20:18:48 pause-260501 kubelet[1278]: E1227 20:18:48.408750    1278 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 27 20:18:48 pause-260501 kubelet[1278]: E1227 20:18:48.408766    1278 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 27 20:18:54 pause-260501 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 20:18:54 pause-260501 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 20:18:54 pause-260501 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:18:54 pause-260501 systemd[1]: kubelet.service: Consumed 1.339s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-260501 -n pause-260501
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-260501 -n pause-260501: exit status 2 (325.19637ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-260501 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (5.78s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-762177 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-762177 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (249.843737ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:27:21Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
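The MK_ADDON_ENABLE_PAUSED failure above appears to come from minikube's "is the cluster paused?" check: per the stderr, it runs "sudo runc list -f json" on the node, which exits non-zero because /run/runc does not exist on this crio-based profile. A small sketch for reproducing that check by hand, assuming the same profile name and binary path as this run:

	# Reproduce the failing paused-state check exactly as reported above
	out/minikube-linux-amd64 -p old-k8s-version-762177 ssh "sudo runc list -f json"

	# Compare with the runtime that is actually managing containers here (crio)
	out/minikube-linux-amd64 -p old-k8s-version-762177 ssh "sudo crictl ps -a"
	out/minikube-linux-amd64 -p old-k8s-version-762177 ssh "sudo systemctl status crio --no-pager"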
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-762177 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-762177 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-762177 describe deploy/metrics-server -n kube-system: exit status 1 (57.923518ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-762177 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-762177
helpers_test.go:244: (dbg) docker inspect old-k8s-version-762177:

-- stdout --
	[
	    {
	        "Id": "b10dcfebdaaf022555cb070af7e83f24bcff8c91713c44d481ba68802b155444",
	        "Created": "2025-12-27T20:26:31.0677059Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 299389,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:26:31.116361816Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/b10dcfebdaaf022555cb070af7e83f24bcff8c91713c44d481ba68802b155444/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b10dcfebdaaf022555cb070af7e83f24bcff8c91713c44d481ba68802b155444/hostname",
	        "HostsPath": "/var/lib/docker/containers/b10dcfebdaaf022555cb070af7e83f24bcff8c91713c44d481ba68802b155444/hosts",
	        "LogPath": "/var/lib/docker/containers/b10dcfebdaaf022555cb070af7e83f24bcff8c91713c44d481ba68802b155444/b10dcfebdaaf022555cb070af7e83f24bcff8c91713c44d481ba68802b155444-json.log",
	        "Name": "/old-k8s-version-762177",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-762177:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-762177",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b10dcfebdaaf022555cb070af7e83f24bcff8c91713c44d481ba68802b155444",
	                "LowerDir": "/var/lib/docker/overlay2/929535ff224193aaf6235bf2344829b22539dbfd7b23129e5d3c2b26b91f334f-init/diff:/var/lib/docker/overlay2/37a0694d5e09176fe02add5b16d603e44d65e3a985a3465775c60819ea782502/diff",
	                "MergedDir": "/var/lib/docker/overlay2/929535ff224193aaf6235bf2344829b22539dbfd7b23129e5d3c2b26b91f334f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/929535ff224193aaf6235bf2344829b22539dbfd7b23129e5d3c2b26b91f334f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/929535ff224193aaf6235bf2344829b22539dbfd7b23129e5d3c2b26b91f334f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-762177",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-762177/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-762177",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-762177",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-762177",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "056cd2d5ec8ddbc64308c441873cc50a2b4e06ee522223aa92f5ffaa272ed280",
	            "SandboxKey": "/var/run/docker/netns/056cd2d5ec8d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-762177": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bbffe05820d013a4fca696f72125227eec8cd0ee61afcb8620d53b5d2291b7b7",
	                    "EndpointID": "dde7e03b0bd2047cb94671c7d6999eac2fdd70a92c76262bce54a1ef77eef101",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "46:b6:b8:fc:72:5d",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-762177",
	                        "b10dcfebdaaf"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-762177 -n old-k8s-version-762177
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-762177 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-762177 logs -n 25: (1.073826362s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                     ARGS                                                                     │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-436655 sudo systemctl status kubelet --all --full --no-pager                                                                      │ flannel-436655         │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p flannel-436655 sudo systemctl cat kubelet --no-pager                                                                                      │ flannel-436655         │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p flannel-436655 sudo journalctl -xeu kubelet --all --full --no-pager                                                                       │ flannel-436655         │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p flannel-436655 sudo cat /etc/kubernetes/kubelet.conf                                                                                      │ flannel-436655         │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p flannel-436655 sudo cat /var/lib/kubelet/config.yaml                                                                                      │ flannel-436655         │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p flannel-436655 sudo systemctl status docker --all --full --no-pager                                                                       │ flannel-436655         │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ ssh     │ -p flannel-436655 sudo systemctl cat docker --no-pager                                                                                       │ flannel-436655         │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p flannel-436655 sudo cat /etc/docker/daemon.json                                                                                           │ flannel-436655         │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ ssh     │ -p flannel-436655 sudo docker system info                                                                                                    │ flannel-436655         │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ ssh     │ -p flannel-436655 sudo systemctl status cri-docker --all --full --no-pager                                                                   │ flannel-436655         │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ ssh     │ -p flannel-436655 sudo systemctl cat cri-docker --no-pager                                                                                   │ flannel-436655         │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p flannel-436655 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                              │ flannel-436655         │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ ssh     │ -p flannel-436655 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                        │ flannel-436655         │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p flannel-436655 sudo cri-dockerd --version                                                                                                 │ flannel-436655         │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p flannel-436655 sudo systemctl status containerd --all --full --no-pager                                                                   │ flannel-436655         │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ ssh     │ -p flannel-436655 sudo systemctl cat containerd --no-pager                                                                                   │ flannel-436655         │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p flannel-436655 sudo cat /lib/systemd/system/containerd.service                                                                            │ flannel-436655         │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p flannel-436655 sudo cat /etc/containerd/config.toml                                                                                       │ flannel-436655         │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p flannel-436655 sudo containerd config dump                                                                                                │ flannel-436655         │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p flannel-436655 sudo systemctl status crio --all --full --no-pager                                                                         │ flannel-436655         │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p flannel-436655 sudo systemctl cat crio --no-pager                                                                                         │ flannel-436655         │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p flannel-436655 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                               │ flannel-436655         │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p flannel-436655 sudo crio config                                                                                                           │ flannel-436655         │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ delete  │ -p flannel-436655                                                                                                                            │ flannel-436655         │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-762177 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ old-k8s-version-762177 │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:26:43
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:26:43.663068  305296 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:26:43.663302  305296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:26:43.663311  305296 out.go:374] Setting ErrFile to fd 2...
	I1227 20:26:43.663315  305296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:26:43.663493  305296 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:26:43.663965  305296 out.go:368] Setting JSON to false
	I1227 20:26:43.665171  305296 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4153,"bootTime":1766863051,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 20:26:43.665228  305296 start.go:143] virtualization: kvm guest
	I1227 20:26:43.667120  305296 out.go:179] * [no-preload-014435] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 20:26:43.668543  305296 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:26:43.668546  305296 notify.go:221] Checking for updates...
	I1227 20:26:43.670944  305296 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:26:43.672020  305296 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:26:43.673325  305296 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube
	I1227 20:26:43.674661  305296 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 20:26:43.675782  305296 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:26:43.677370  305296 config.go:182] Loaded profile config "bridge-436655": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:26:43.677514  305296 config.go:182] Loaded profile config "flannel-436655": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:26:43.677637  305296 config.go:182] Loaded profile config "old-k8s-version-762177": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 20:26:43.677754  305296 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:26:43.703936  305296 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 20:26:43.704087  305296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:26:43.771565  305296 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-27 20:26:43.760774122 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:26:43.771660  305296 docker.go:319] overlay module found
	I1227 20:26:43.774996  305296 out.go:179] * Using the docker driver based on user configuration
	I1227 20:26:43.775981  305296 start.go:309] selected driver: docker
	I1227 20:26:43.775997  305296 start.go:928] validating driver "docker" against <nil>
	I1227 20:26:43.776008  305296 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:26:43.776621  305296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:26:43.842353  305296 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-27 20:26:43.830069998 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:26:43.842596  305296 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 20:26:43.842854  305296 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:26:43.844878  305296 out.go:179] * Using Docker driver with root privileges
	I1227 20:26:43.846043  305296 cni.go:84] Creating CNI manager for ""
	I1227 20:26:43.846144  305296 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:26:43.846160  305296 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 20:26:43.846253  305296 start.go:353] cluster config:
	{Name:no-preload-014435 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-014435 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:26:43.847529  305296 out.go:179] * Starting "no-preload-014435" primary control-plane node in "no-preload-014435" cluster
	I1227 20:26:43.848597  305296 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:26:43.849680  305296 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:26:43.850824  305296 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:26:43.850956  305296 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:26:43.851163  305296 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/config.json ...
	I1227 20:26:43.851134  305296 cache.go:107] acquiring lock: {Name:mkbf8013e304cf72565565ec73d6e8c841102548 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:26:43.851187  305296 cache.go:107] acquiring lock: {Name:mk73abfdc6ada091682c2dbf6848af1c08b22aba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:26:43.851222  305296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/config.json: {Name:mk92b5a0f9009ab2c8ec69411a92208ee3ef5475 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:26:43.851223  305296 cache.go:107] acquiring lock: {Name:mkd41fdff83db10f19a9aaf39c82eac8b62c593e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:26:43.851283  305296 cache.go:115] /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I1227 20:26:43.851295  305296 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 121.81µs
	I1227 20:26:43.851305  305296 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1227 20:26:43.851132  305296 cache.go:107] acquiring lock: {Name:mkbccac0bb664dd93154dd51e6d66db53713b44e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:26:43.851321  305296 cache.go:107] acquiring lock: {Name:mk823e851565ecb36a02ad5b6a0d4a7df2dfa5e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:26:43.851340  305296 cache.go:115] /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I1227 20:26:43.851348  305296 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0" took 240.748µs
	I1227 20:26:43.851356  305296 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I1227 20:26:43.851196  305296 cache.go:107] acquiring lock: {Name:mkc7c9b6d0e03c1b5aa41438b1790f395d1e5f80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:26:43.851363  305296 cache.go:115] /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1227 20:26:43.851371  305296 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 52.709µs
	I1227 20:26:43.851382  305296 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1227 20:26:43.851391  305296 cache.go:115] /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I1227 20:26:43.851383  305296 cache.go:107] acquiring lock: {Name:mk6e960fa523b2517ada6348a0c0342dcc4edad0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:26:43.851403  305296 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0" took 182.462µs
	I1227 20:26:43.851417  305296 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I1227 20:26:43.851399  305296 cache.go:115] /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1227 20:26:43.851433  305296 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 326.943µs
	I1227 20:26:43.851441  305296 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1227 20:26:43.851278  305296 cache.go:107] acquiring lock: {Name:mk2782c5d3ecb08952ecec421a44319fef36b52f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:26:43.851484  305296 cache.go:115] /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I1227 20:26:43.851499  305296 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0" took 124.892µs
	I1227 20:26:43.851522  305296 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I1227 20:26:43.851545  305296 cache.go:115] /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I1227 20:26:43.851557  305296 cache.go:115] /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1227 20:26:43.851556  305296 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0" took 363.573µs
	I1227 20:26:43.851565  305296 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I1227 20:26:43.851568  305296 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 291.946µs
	I1227 20:26:43.851577  305296 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1227 20:26:43.851585  305296 cache.go:87] Successfully saved all images to host disk.
	I1227 20:26:43.881042  305296 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:26:43.881078  305296 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:26:43.881098  305296 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:26:43.881143  305296 start.go:360] acquireMachinesLock for no-preload-014435: {Name:mk1127162727b27a4df39db89b47542aea8edc3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:26:43.881284  305296 start.go:364] duration metric: took 99.641µs to acquireMachinesLock for "no-preload-014435"
	I1227 20:26:43.881318  305296 start.go:93] Provisioning new machine with config: &{Name:no-preload-014435 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-014435 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:26:43.881508  305296 start.go:125] createHost starting for "" (driver="docker")
	I1227 20:26:43.641001  297482 out.go:252]   - Configuring RBAC rules ...
	I1227 20:26:43.641156  297482 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 20:26:43.645703  297482 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 20:26:43.652956  297482 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 20:26:43.656133  297482 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 20:26:43.658569  297482 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 20:26:43.661312  297482 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 20:26:43.673143  297482 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 20:26:43.879009  297482 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 20:26:44.049294  297482 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 20:26:44.050400  297482 kubeadm.go:319] 
	I1227 20:26:44.050496  297482 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 20:26:44.050516  297482 kubeadm.go:319] 
	I1227 20:26:44.050626  297482 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 20:26:44.050637  297482 kubeadm.go:319] 
	I1227 20:26:44.050670  297482 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 20:26:44.050746  297482 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 20:26:44.050817  297482 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 20:26:44.050826  297482 kubeadm.go:319] 
	I1227 20:26:44.050902  297482 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 20:26:44.050945  297482 kubeadm.go:319] 
	I1227 20:26:44.051038  297482 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 20:26:44.051065  297482 kubeadm.go:319] 
	I1227 20:26:44.051140  297482 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 20:26:44.051268  297482 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 20:26:44.051364  297482 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 20:26:44.051373  297482 kubeadm.go:319] 
	I1227 20:26:44.051487  297482 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 20:26:44.051620  297482 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 20:26:44.051630  297482 kubeadm.go:319] 
	I1227 20:26:44.051735  297482 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 6bopjc.9u8nwbscqn6nh1ad \
	I1227 20:26:44.051896  297482 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da6685172cfe638aa995cfd5ba180acce81fcf1770a93ae27b4f215e6d45ef35 \
	I1227 20:26:44.051942  297482 kubeadm.go:319] 	--control-plane 
	I1227 20:26:44.051951  297482 kubeadm.go:319] 
	I1227 20:26:44.052093  297482 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 20:26:44.052109  297482 kubeadm.go:319] 
	I1227 20:26:44.052234  297482 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 6bopjc.9u8nwbscqn6nh1ad \
	I1227 20:26:44.052370  297482 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da6685172cfe638aa995cfd5ba180acce81fcf1770a93ae27b4f215e6d45ef35 
	I1227 20:26:44.054722  297482 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1227 20:26:44.054855  297482 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 20:26:44.054885  297482 cni.go:84] Creating CNI manager for ""
	I1227 20:26:44.054897  297482 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:26:44.057464  297482 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1227 20:26:44.059367  297482 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 20:26:44.064135  297482 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1227 20:26:44.064157  297482 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 20:26:44.078648  297482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	W1227 20:26:40.040881  292834 pod_ready.go:104] pod "coredns-7d764666f9-xvqzs" is not "Ready", error: <nil>
	W1227 20:26:42.537349  292834 pod_ready.go:104] pod "coredns-7d764666f9-xvqzs" is not "Ready", error: <nil>
	W1227 20:26:44.538610  292834 pod_ready.go:104] pod "coredns-7d764666f9-xvqzs" is not "Ready", error: <nil>
	I1227 20:26:41.744242  288579 system_pods.go:86] 7 kube-system pods found
	I1227 20:26:41.744281  288579 system_pods.go:89] "coredns-7d764666f9-2xbkx" [3260e3f8-a42c-40a9-a365-5144d9cfc931] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:26:41.744287  288579 system_pods.go:89] "etcd-flannel-436655" [d99f876b-be2e-4e9c-9b37-497cea37d590] Running
	I1227 20:26:41.744294  288579 system_pods.go:89] "kube-apiserver-flannel-436655" [b41bae15-ef3e-4298-8603-e7b898a993a9] Running
	I1227 20:26:41.744300  288579 system_pods.go:89] "kube-controller-manager-flannel-436655" [9842e6a6-f0ed-4c38-8f32-4679bf765303] Running
	I1227 20:26:41.744306  288579 system_pods.go:89] "kube-proxy-qmhrs" [081d9459-f93e-4d76-b6b4-fc2aeaebe91e] Running
	I1227 20:26:41.744314  288579 system_pods.go:89] "kube-scheduler-flannel-436655" [0001c718-d377-41d3-ba88-9140aa1ac433] Running
	I1227 20:26:41.744319  288579 system_pods.go:89] "storage-provisioner" [436621b0-382e-4b85-ae32-e70d87bccd60] Running
	I1227 20:26:45.764756  288579 system_pods.go:86] 7 kube-system pods found
	I1227 20:26:45.764784  288579 system_pods.go:89] "coredns-7d764666f9-2xbkx" [3260e3f8-a42c-40a9-a365-5144d9cfc931] Running
	I1227 20:26:45.764789  288579 system_pods.go:89] "etcd-flannel-436655" [d99f876b-be2e-4e9c-9b37-497cea37d590] Running
	I1227 20:26:45.764793  288579 system_pods.go:89] "kube-apiserver-flannel-436655" [b41bae15-ef3e-4298-8603-e7b898a993a9] Running
	I1227 20:26:45.764796  288579 system_pods.go:89] "kube-controller-manager-flannel-436655" [9842e6a6-f0ed-4c38-8f32-4679bf765303] Running
	I1227 20:26:45.764801  288579 system_pods.go:89] "kube-proxy-qmhrs" [081d9459-f93e-4d76-b6b4-fc2aeaebe91e] Running
	I1227 20:26:45.764804  288579 system_pods.go:89] "kube-scheduler-flannel-436655" [0001c718-d377-41d3-ba88-9140aa1ac433] Running
	I1227 20:26:45.764807  288579 system_pods.go:89] "storage-provisioner" [436621b0-382e-4b85-ae32-e70d87bccd60] Running
	I1227 20:26:45.764815  288579 system_pods.go:126] duration metric: took 17.356361131s to wait for k8s-apps to be running ...
	I1227 20:26:45.764822  288579 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:26:45.764863  288579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:26:45.778294  288579 system_svc.go:56] duration metric: took 13.462541ms WaitForService to wait for kubelet
	I1227 20:26:45.778324  288579 kubeadm.go:587] duration metric: took 21.731546303s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:26:45.778343  288579 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:26:45.780907  288579 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 20:26:45.780946  288579 node_conditions.go:123] node cpu capacity is 8
	I1227 20:26:45.780962  288579 node_conditions.go:105] duration metric: took 2.613856ms to run NodePressure ...
	I1227 20:26:45.780977  288579 start.go:242] waiting for startup goroutines ...
	I1227 20:26:45.780987  288579 start.go:247] waiting for cluster config update ...
	I1227 20:26:45.780996  288579 start.go:256] writing updated cluster config ...
	I1227 20:26:45.781252  288579 ssh_runner.go:195] Run: rm -f paused
	I1227 20:26:45.784825  288579 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:26:45.787677  288579 pod_ready.go:83] waiting for pod "coredns-7d764666f9-2xbkx" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:26:45.791460  288579 pod_ready.go:94] pod "coredns-7d764666f9-2xbkx" is "Ready"
	I1227 20:26:45.791480  288579 pod_ready.go:86] duration metric: took 3.785397ms for pod "coredns-7d764666f9-2xbkx" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:26:45.793140  288579 pod_ready.go:83] waiting for pod "etcd-flannel-436655" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:26:45.796446  288579 pod_ready.go:94] pod "etcd-flannel-436655" is "Ready"
	I1227 20:26:45.796468  288579 pod_ready.go:86] duration metric: took 3.31049ms for pod "etcd-flannel-436655" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:26:45.798252  288579 pod_ready.go:83] waiting for pod "kube-apiserver-flannel-436655" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:26:45.801417  288579 pod_ready.go:94] pod "kube-apiserver-flannel-436655" is "Ready"
	I1227 20:26:45.801434  288579 pod_ready.go:86] duration metric: took 3.162121ms for pod "kube-apiserver-flannel-436655" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:26:45.803017  288579 pod_ready.go:83] waiting for pod "kube-controller-manager-flannel-436655" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:26:46.188175  288579 pod_ready.go:94] pod "kube-controller-manager-flannel-436655" is "Ready"
	I1227 20:26:46.188210  288579 pod_ready.go:86] duration metric: took 385.170607ms for pod "kube-controller-manager-flannel-436655" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:26:46.389053  288579 pod_ready.go:83] waiting for pod "kube-proxy-qmhrs" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:26:46.789375  288579 pod_ready.go:94] pod "kube-proxy-qmhrs" is "Ready"
	I1227 20:26:46.789399  288579 pod_ready.go:86] duration metric: took 400.322735ms for pod "kube-proxy-qmhrs" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:26:46.988301  288579 pod_ready.go:83] waiting for pod "kube-scheduler-flannel-436655" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:26:47.389271  288579 pod_ready.go:94] pod "kube-scheduler-flannel-436655" is "Ready"
	I1227 20:26:47.389293  288579 pod_ready.go:86] duration metric: took 400.968945ms for pod "kube-scheduler-flannel-436655" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:26:47.389305  288579 pod_ready.go:40] duration metric: took 1.60445667s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:26:47.433485  288579 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 20:26:47.435231  288579 out.go:179] * Done! kubectl is now configured to use "flannel-436655" cluster and "default" namespace by default
	I1227 20:26:43.883120  305296 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 20:26:43.883445  305296 start.go:159] libmachine.API.Create for "no-preload-014435" (driver="docker")
	I1227 20:26:43.883480  305296 client.go:173] LocalClient.Create starting
	I1227 20:26:43.883562  305296 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem
	I1227 20:26:43.883602  305296 main.go:144] libmachine: Decoding PEM data...
	I1227 20:26:43.883621  305296 main.go:144] libmachine: Parsing certificate...
	I1227 20:26:43.883701  305296 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem
	I1227 20:26:43.883724  305296 main.go:144] libmachine: Decoding PEM data...
	I1227 20:26:43.883740  305296 main.go:144] libmachine: Parsing certificate...
	I1227 20:26:43.884182  305296 cli_runner.go:164] Run: docker network inspect no-preload-014435 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 20:26:43.909085  305296 cli_runner.go:211] docker network inspect no-preload-014435 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 20:26:43.909367  305296 network_create.go:284] running [docker network inspect no-preload-014435] to gather additional debugging logs...
	I1227 20:26:43.909391  305296 cli_runner.go:164] Run: docker network inspect no-preload-014435
	W1227 20:26:43.929894  305296 cli_runner.go:211] docker network inspect no-preload-014435 returned with exit code 1
	I1227 20:26:43.929985  305296 network_create.go:287] error running [docker network inspect no-preload-014435]: docker network inspect no-preload-014435: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-014435 not found
	I1227 20:26:43.930010  305296 network_create.go:289] output of [docker network inspect no-preload-014435]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-014435 not found
	
	** /stderr **
	I1227 20:26:43.930119  305296 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:26:43.956358  305296 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0db0ba8938bc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:24:76:f1:9a:26} reservation:<nil>}
	I1227 20:26:43.958172  305296 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-11f8d597a005 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:16:b4:6c:7e:ff:91} reservation:<nil>}
	I1227 20:26:43.959271  305296 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f7cf3350a110 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:14:0b:19:b4:4d} reservation:<nil>}
	I1227 20:26:43.960100  305296 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-1922c45a9728 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3a:05:b5:d5:9f:f7} reservation:<nil>}
	I1227 20:26:43.960971  305296 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-75571f42d1f9 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:26:9c:b5:c2:89:ce} reservation:<nil>}
	I1227 20:26:43.962311  305296 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f11ac0}
	I1227 20:26:43.962366  305296 network_create.go:124] attempt to create docker network no-preload-014435 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1227 20:26:43.962449  305296 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-014435 no-preload-014435
	I1227 20:26:44.015223  305296 network_create.go:108] docker network no-preload-014435 192.168.94.0/24 created
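
The subnet scan above skips every /24 already claimed by an existing bridge network; the sketch below (illustrative only, not minikube's code) reproduces the same view from the host and shows why 192.168.94.0/24 was the first free candidate:

    # Print the subnet owned by each existing docker bridge network.
    for net in $(docker network ls --filter driver=bridge --format '{{.Name}}'); do
      docker network inspect "$net" --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'
    done
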
	I1227 20:26:44.015255  305296 kic.go:121] calculated static IP "192.168.94.2" for the "no-preload-014435" container
	I1227 20:26:44.015318  305296 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 20:26:44.033033  305296 cli_runner.go:164] Run: docker volume create no-preload-014435 --label name.minikube.sigs.k8s.io=no-preload-014435 --label created_by.minikube.sigs.k8s.io=true
	I1227 20:26:44.055073  305296 oci.go:103] Successfully created a docker volume no-preload-014435
	I1227 20:26:44.055174  305296 cli_runner.go:164] Run: docker run --rm --name no-preload-014435-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-014435 --entrypoint /usr/bin/test -v no-preload-014435:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 20:26:44.487830  305296 oci.go:107] Successfully prepared a docker volume no-preload-014435
	I1227 20:26:44.487966  305296 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	W1227 20:26:44.488066  305296 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1227 20:26:44.488104  305296 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1227 20:26:44.488150  305296 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 20:26:44.544843  305296 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-014435 --name no-preload-014435 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-014435 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-014435 --network no-preload-014435 --ip 192.168.94.2 --volume no-preload-014435:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 20:26:44.844765  305296 cli_runner.go:164] Run: docker container inspect no-preload-014435 --format={{.State.Running}}
	I1227 20:26:44.868090  305296 cli_runner.go:164] Run: docker container inspect no-preload-014435 --format={{.State.Status}}
	I1227 20:26:44.890490  305296 cli_runner.go:164] Run: docker exec no-preload-014435 stat /var/lib/dpkg/alternatives/iptables
	I1227 20:26:44.948571  305296 oci.go:144] the created container "no-preload-014435" has a running status.
	I1227 20:26:44.948620  305296 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22332-10897/.minikube/machines/no-preload-014435/id_rsa...
	I1227 20:26:45.008768  305296 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22332-10897/.minikube/machines/no-preload-014435/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 20:26:45.035976  305296 cli_runner.go:164] Run: docker container inspect no-preload-014435 --format={{.State.Status}}
	I1227 20:26:45.056783  305296 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 20:26:45.056805  305296 kic_runner.go:114] Args: [docker exec --privileged no-preload-014435 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 20:26:45.105409  305296 cli_runner.go:164] Run: docker container inspect no-preload-014435 --format={{.State.Status}}
	I1227 20:26:45.124870  305296 machine.go:94] provisionDockerMachine start ...
	I1227 20:26:45.124993  305296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:26:45.148541  305296 main.go:144] libmachine: Using SSH client type: native
	I1227 20:26:45.148887  305296 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1227 20:26:45.148908  305296 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:26:45.150325  305296 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40546->127.0.0.1:33093: read: connection reset by peer
	I1227 20:26:48.273607  305296 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-014435
	
	I1227 20:26:48.273634  305296 ubuntu.go:182] provisioning hostname "no-preload-014435"
	I1227 20:26:48.273689  305296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:26:48.291256  305296 main.go:144] libmachine: Using SSH client type: native
	I1227 20:26:48.291480  305296 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1227 20:26:48.291495  305296 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-014435 && echo "no-preload-014435" | sudo tee /etc/hostname
	I1227 20:26:48.422698  305296 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-014435
	
	I1227 20:26:48.422770  305296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:26:48.442198  305296 main.go:144] libmachine: Using SSH client type: native
	I1227 20:26:48.442411  305296 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1227 20:26:48.442427  305296 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-014435' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-014435/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-014435' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:26:48.567335  305296 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:26:48.567362  305296 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-10897/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-10897/.minikube}
	I1227 20:26:48.567392  305296 ubuntu.go:190] setting up certificates
	I1227 20:26:48.567405  305296 provision.go:84] configureAuth start
	I1227 20:26:48.567450  305296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-014435
	I1227 20:26:48.584928  305296 provision.go:143] copyHostCerts
	I1227 20:26:48.584991  305296 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem, removing ...
	I1227 20:26:48.585003  305296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem
	I1227 20:26:48.585072  305296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem (1675 bytes)
	I1227 20:26:48.585171  305296 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem, removing ...
	I1227 20:26:48.585181  305296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem
	I1227 20:26:48.585207  305296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem (1078 bytes)
	I1227 20:26:48.585263  305296 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem, removing ...
	I1227 20:26:48.585270  305296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem
	I1227 20:26:48.585299  305296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem (1123 bytes)
	I1227 20:26:48.585364  305296 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem org=jenkins.no-preload-014435 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-014435]
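
minikube generates this server certificate in Go; the openssl sketch below, with the same subject organization and SANs, only conveys what gets produced (file names and the validity period are assumptions for illustration):

    # Illustrative only: a self-signed server cert with the SANs listed in the log.
    # (Requires bash for the <() process substitution.)
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.no-preload-014435/CN=minikube"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.94.2,DNS:localhost,DNS:minikube,DNS:no-preload-014435")
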
	I1227 20:26:48.629974  305296 provision.go:177] copyRemoteCerts
	I1227 20:26:48.630034  305296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:26:48.630072  305296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:26:48.648535  305296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/no-preload-014435/id_rsa Username:docker}
	I1227 20:26:44.817730  297482 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 20:26:44.817809  297482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:26:44.817830  297482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-762177 minikube.k8s.io/updated_at=2025_12_27T20_26_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562 minikube.k8s.io/name=old-k8s-version-762177 minikube.k8s.io/primary=true
	I1227 20:26:44.828108  297482 ops.go:34] apiserver oom_adj: -16
	I1227 20:26:44.913242  297482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:26:45.413689  297482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:26:45.914019  297482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:26:46.414088  297482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:26:46.913828  297482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:26:47.413298  297482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:26:47.914025  297482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:26:48.414199  297482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:26:48.914247  297482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1227 20:26:47.037683  292834 pod_ready.go:104] pod "coredns-7d764666f9-xvqzs" is not "Ready", error: <nil>
	W1227 20:26:49.038371  292834 pod_ready.go:104] pod "coredns-7d764666f9-xvqzs" is not "Ready", error: <nil>
	I1227 20:26:48.739990  305296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:26:48.759403  305296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 20:26:48.777064  305296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:26:48.794239  305296 provision.go:87] duration metric: took 226.812634ms to configureAuth
	I1227 20:26:48.794266  305296 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:26:48.794481  305296 config.go:182] Loaded profile config "no-preload-014435": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:26:48.794590  305296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:26:48.813278  305296 main.go:144] libmachine: Using SSH client type: native
	I1227 20:26:48.813487  305296 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1227 20:26:48.813504  305296 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:26:49.079390  305296 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:26:49.079421  305296 machine.go:97] duration metric: took 3.954524788s to provisionDockerMachine
	I1227 20:26:49.079433  305296 client.go:176] duration metric: took 5.195944594s to LocalClient.Create
	I1227 20:26:49.079457  305296 start.go:167] duration metric: took 5.196011537s to libmachine.API.Create "no-preload-014435"
	I1227 20:26:49.079468  305296 start.go:293] postStartSetup for "no-preload-014435" (driver="docker")
	I1227 20:26:49.079485  305296 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:26:49.079575  305296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:26:49.079622  305296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:26:49.097812  305296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/no-preload-014435/id_rsa Username:docker}
	I1227 20:26:49.189407  305296 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:26:49.192812  305296 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:26:49.192835  305296 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:26:49.192845  305296 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/addons for local assets ...
	I1227 20:26:49.192905  305296 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/files for local assets ...
	I1227 20:26:49.193001  305296 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem -> 144272.pem in /etc/ssl/certs
	I1227 20:26:49.193094  305296 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:26:49.200431  305296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:26:49.219673  305296 start.go:296] duration metric: took 140.191037ms for postStartSetup
	I1227 20:26:49.220036  305296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-014435
	I1227 20:26:49.237820  305296 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/config.json ...
	I1227 20:26:49.238115  305296 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:26:49.238155  305296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:26:49.254841  305296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/no-preload-014435/id_rsa Username:docker}
	I1227 20:26:49.341674  305296 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:26:49.346063  305296 start.go:128] duration metric: took 5.464542847s to createHost
	I1227 20:26:49.346084  305296 start.go:83] releasing machines lock for "no-preload-014435", held for 5.464784613s
	I1227 20:26:49.346149  305296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-014435
	I1227 20:26:49.364559  305296 ssh_runner.go:195] Run: cat /version.json
	I1227 20:26:49.364607  305296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:26:49.364633  305296 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:26:49.364707  305296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:26:49.382974  305296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/no-preload-014435/id_rsa Username:docker}
	I1227 20:26:49.383414  305296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/no-preload-014435/id_rsa Username:docker}
	I1227 20:26:49.529784  305296 ssh_runner.go:195] Run: systemctl --version
	I1227 20:26:49.537456  305296 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:26:49.571976  305296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:26:49.576791  305296 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:26:49.576861  305296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:26:49.602819  305296 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1227 20:26:49.602855  305296 start.go:496] detecting cgroup driver to use...
	I1227 20:26:49.602886  305296 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 20:26:49.602961  305296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:26:49.622013  305296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:26:49.634424  305296 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:26:49.634481  305296 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:26:49.650776  305296 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:26:49.667860  305296 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:26:49.751269  305296 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:26:49.840936  305296 docker.go:234] disabling docker service ...
	I1227 20:26:49.840997  305296 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:26:49.859248  305296 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:26:49.871258  305296 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:26:49.959548  305296 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:26:50.046058  305296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:26:50.058171  305296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:26:50.071876  305296 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:26:50.071962  305296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:26:50.081870  305296 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 20:26:50.081951  305296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:26:50.090513  305296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:26:50.098848  305296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:26:50.106987  305296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:26:50.114805  305296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:26:50.123161  305296 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:26:50.135798  305296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:26:50.144132  305296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:26:50.151101  305296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:26:50.158243  305296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:26:50.236521  305296 ssh_runner.go:195] Run: sudo systemctl restart crio
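
Taken together, the sed edits above should leave the CRI-O drop-in with roughly the following keys before the restart (a sketch assuming an otherwise default 02-crio.conf; the values are copied from the commands in the log):

    # Inspect the drop-in edited above; the relevant lines should now read:
    sudo cat /etc/crio/crio.conf.d/02-crio.conf
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]
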
	I1227 20:26:50.368506  305296 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:26:50.368580  305296 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:26:50.372534  305296 start.go:574] Will wait 60s for crictl version
	I1227 20:26:50.372589  305296 ssh_runner.go:195] Run: which crictl
	I1227 20:26:50.376182  305296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:26:50.400501  305296 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:26:50.400580  305296 ssh_runner.go:195] Run: crio --version
	I1227 20:26:50.429894  305296 ssh_runner.go:195] Run: crio --version
	I1227 20:26:50.461028  305296 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:26:50.462144  305296 cli_runner.go:164] Run: docker network inspect no-preload-014435 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:26:50.482768  305296 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1227 20:26:50.487215  305296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:26:50.497880  305296 kubeadm.go:884] updating cluster {Name:no-preload-014435 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-014435 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...

	I1227 20:26:50.498021  305296 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:26:50.498058  305296 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:26:50.523070  305296 crio.go:557] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0". assuming images are not preloaded.
	I1227 20:26:50.523099  305296 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0 registry.k8s.io/kube-controller-manager:v1.35.0 registry.k8s.io/kube-scheduler:v1.35.0 registry.k8s.io/kube-proxy:v1.35.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1227 20:26:50.523140  305296 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:26:50.523173  305296 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0
	I1227 20:26:50.523200  305296 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1227 20:26:50.523215  305296 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0
	I1227 20:26:50.523241  305296 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1227 20:26:50.523203  305296 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1227 20:26:50.523279  305296 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0
	I1227 20:26:50.523184  305296 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1227 20:26:50.524260  305296 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1227 20:26:50.524459  305296 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0
	I1227 20:26:50.524566  305296 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1227 20:26:50.524608  305296 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0
	I1227 20:26:50.524632  305296 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0
	I1227 20:26:50.524653  305296 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1227 20:26:50.524671  305296 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:26:50.524691  305296 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
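
Because the preload check above found nothing for v1.35.0 on crio, and none of these images exist in the local docker daemon, minikube falls back to per-image tarballs from its cache. A condensed sketch of that fallback for a single image, run on the node (paths and names mirror the log; this is an illustration, not minikube's code):

    # Check the runtime for the image, drop any stale copy, then load the cached tarball.
    img="registry.k8s.io/pause:3.10.1"
    tarball=/var/lib/minikube/images/pause_3.10.1
    if ! sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
      sudo crictl rmi "$img" 2>/dev/null || true
      sudo podman load -i "$tarball"
    fi
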
	I1227 20:26:50.668637  305296 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1227 20:26:50.686884  305296 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0
	I1227 20:26:50.688798  305296 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.6-0
	I1227 20:26:50.704499  305296 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1227 20:26:50.704547  305296 cri.go:226] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1227 20:26:50.704602  305296 ssh_runner.go:195] Run: which crictl
	I1227 20:26:50.706492  305296 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0
	I1227 20:26:50.706555  305296 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1227 20:26:50.708272  305296 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0
	I1227 20:26:50.724643  305296 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0" does not exist at hash "5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499" in container runtime
	I1227 20:26:50.724689  305296 cri.go:226] Removing image: registry.k8s.io/kube-apiserver:v1.35.0
	I1227 20:26:50.724738  305296 ssh_runner.go:195] Run: which crictl
	I1227 20:26:50.727873  305296 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2" in container runtime
	I1227 20:26:50.727903  305296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1227 20:26:50.727994  305296 cri.go:226] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1227 20:26:50.728051  305296 ssh_runner.go:195] Run: which crictl
	I1227 20:26:50.748190  305296 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1227 20:26:50.748239  305296 cri.go:226] Removing image: registry.k8s.io/pause:3.10.1
	I1227 20:26:50.748284  305296 ssh_runner.go:195] Run: which crictl
	I1227 20:26:50.748303  305296 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0" does not exist at hash "2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508" in container runtime
	I1227 20:26:50.748346  305296 cri.go:226] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1227 20:26:50.748409  305296 ssh_runner.go:195] Run: which crictl
	I1227 20:26:50.750168  305296 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0" does not exist at hash "32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8" in container runtime
	I1227 20:26:50.750203  305296 cri.go:226] Removing image: registry.k8s.io/kube-proxy:v1.35.0
	I1227 20:26:50.750238  305296 ssh_runner.go:195] Run: which crictl
	I1227 20:26:50.750357  305296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I1227 20:26:50.758277  305296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1227 20:26:50.758300  305296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1227 20:26:50.758335  305296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1227 20:26:50.758414  305296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I1227 20:26:50.758422  305296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I1227 20:26:50.769559  305296 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0
	I1227 20:26:50.786206  305296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I1227 20:26:50.793705  305296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1227 20:26:50.793863  305296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I1227 20:26:50.793863  305296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1227 20:26:50.794334  305296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1227 20:26:50.794343  305296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I1227 20:26:50.830741  305296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1227 20:26:50.830776  305296 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1227 20:26:50.830740  305296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I1227 20:26:50.830856  305296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I1227 20:26:50.830866  305296 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1227 20:26:50.830894  305296 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0" does not exist at hash "550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc" in container runtime
	I1227 20:26:50.830945  305296 cri.go:226] Removing image: registry.k8s.io/kube-scheduler:v1.35.0
	I1227 20:26:50.830984  305296 ssh_runner.go:195] Run: which crictl
	I1227 20:26:50.831375  305296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1227 20:26:50.834530  305296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I1227 20:26:50.865313  305296 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0
	I1227 20:26:50.865332  305296 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0
	I1227 20:26:50.865373  305296 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1227 20:26:50.865397  305296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1227 20:26:50.865418  305296 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I1227 20:26:50.865448  305296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I1227 20:26:50.865474  305296 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1227 20:26:50.865493  305296 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0
	I1227 20:26:50.865504  305296 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0
	I1227 20:26:50.865546  305296 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1227 20:26:50.865606  305296 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0
	I1227 20:26:50.865418  305296 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1227 20:26:50.865606  305296 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0
	I1227 20:26:50.871078  305296 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0': No such file or directory
	I1227 20:26:50.871106  305296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0 (23144960 bytes)
	I1227 20:26:50.872482  305296 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1227 20:26:50.872511  305296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1227 20:26:50.901349  305296 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:26:50.918345  305296 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0': No such file or directory
	I1227 20:26:50.918382  305296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0 (27696640 bytes)
	I1227 20:26:50.918549  305296 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1227 20:26:50.918579  305296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (23653376 bytes)
	I1227 20:26:50.918695  305296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I1227 20:26:50.918774  305296 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0': No such file or directory
	I1227 20:26:50.918802  305296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0 (25791488 bytes)
	I1227 20:26:51.034252  305296 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1227 20:26:51.034332  305296 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1227 20:26:51.034325  305296 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1227 20:26:51.034448  305296 cri.go:226] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:26:51.034500  305296 ssh_runner.go:195] Run: which crictl
	I1227 20:26:51.042646  305296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I1227 20:26:51.093355  305296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:26:51.496662  305296 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1227 20:26:51.496700  305296 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I1227 20:26:51.496739  305296 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0
	I1227 20:26:51.496761  305296 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I1227 20:26:51.496836  305296 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0
	I1227 20:26:51.496852  305296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:26:52.821867  305296 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.324987571s)
	I1227 20:26:52.821900  305296 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0: (1.325043332s)
	I1227 20:26:52.821948  305296 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0: (1.325161059s)
	I1227 20:26:52.821957  305296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:26:52.821972  305296 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 from cache
	I1227 20:26:52.821994  305296 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0': No such file or directory
	I1227 20:26:52.822004  305296 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1227 20:26:52.822022  305296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0 (17248256 bytes)
	I1227 20:26:52.822040  305296 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1227 20:26:52.857383  305296 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1227 20:26:52.857500  305296 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1227 20:26:49.413565  297482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:26:49.914173  297482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:26:50.414238  297482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:26:50.913849  297482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:26:51.414106  297482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:26:51.913383  297482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:26:52.414001  297482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:26:52.913890  297482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:26:53.414146  297482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:26:53.914137  297482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1227 20:26:51.538194  292834 pod_ready.go:104] pod "coredns-7d764666f9-xvqzs" is not "Ready", error: <nil>
	W1227 20:26:53.538384  292834 pod_ready.go:104] pod "coredns-7d764666f9-xvqzs" is not "Ready", error: <nil>
	I1227 20:26:54.414233  297482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:26:54.914021  297482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:26:55.414107  297482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:26:55.913488  297482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:26:56.413724  297482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:26:56.913926  297482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:26:56.996755  297482 kubeadm.go:1114] duration metric: took 12.179018077s to wait for elevateKubeSystemPrivileges
	I1227 20:26:56.996798  297482 kubeadm.go:403] duration metric: took 21.327899547s to StartCluster
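
The repeated "kubectl get sa default" calls above are a simple poll: kubeadm has just brought the control plane up, and the run retries roughly every 500ms until the default service account exists before continuing cluster setup. A minimal equivalent of that wait:

    # Poll until the default service account has been created by the controller manager.
    until sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
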
	I1227 20:26:56.996822  297482 settings.go:142] acquiring lock: {Name:mkf3077b12a3cef0d75d0d0642c2652aa74718f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:26:56.996885  297482 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:26:56.998103  297482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/kubeconfig: {Name:mk4ccafb996928672a8e78a62ce47d8add645009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:26:56.998299  297482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 20:26:56.998309  297482 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:26:56.998392  297482 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:26:56.998492  297482 config.go:182] Loaded profile config "old-k8s-version-762177": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 20:26:56.998515  297482 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-762177"
	I1227 20:26:56.998493  297482 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-762177"
	I1227 20:26:56.998565  297482 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-762177"
	I1227 20:26:56.998593  297482 host.go:66] Checking if "old-k8s-version-762177" exists ...
	I1227 20:26:56.998546  297482 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-762177"
	I1227 20:26:56.999064  297482 cli_runner.go:164] Run: docker container inspect old-k8s-version-762177 --format={{.State.Status}}
	I1227 20:26:56.999201  297482 cli_runner.go:164] Run: docker container inspect old-k8s-version-762177 --format={{.State.Status}}
	I1227 20:26:57.000167  297482 out.go:179] * Verifying Kubernetes components...
	I1227 20:26:57.001929  297482 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:26:57.026053  297482 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:26:57.026107  297482 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-762177"
	I1227 20:26:57.026146  297482 host.go:66] Checking if "old-k8s-version-762177" exists ...
	I1227 20:26:57.026606  297482 cli_runner.go:164] Run: docker container inspect old-k8s-version-762177 --format={{.State.Status}}
	I1227 20:26:57.027297  297482 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:26:57.027318  297482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:26:57.027370  297482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762177
	I1227 20:26:57.056353  297482 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:26:57.056615  297482 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:26:57.056694  297482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762177
	I1227 20:26:57.056537  297482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/old-k8s-version-762177/id_rsa Username:docker}
	I1227 20:26:57.087977  297482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/old-k8s-version-762177/id_rsa Username:docker}
	I1227 20:26:57.122041  297482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 20:26:57.167229  297482 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:26:57.174987  297482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:26:57.207779  297482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:26:57.357660  297482 start.go:987] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1227 20:26:57.358866  297482 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-762177" to be "Ready" ...
	I1227 20:26:57.577865  297482 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1227 20:26:54.131739  305296 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.309674403s)
	I1227 20:26:54.131773  305296 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1227 20:26:54.131811  305296 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0
	I1227 20:26:54.131811  305296 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.274286299s)
	I1227 20:26:54.131876  305296 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0
	I1227 20:26:54.131882  305296 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1227 20:26:54.131925  305296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1227 20:26:55.831279  305296 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0: (1.699372205s)
	I1227 20:26:55.831307  305296 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 from cache
	I1227 20:26:55.831346  305296 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1227 20:26:55.831411  305296 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0
	I1227 20:26:57.353558  305296 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0: (1.522119584s)
	I1227 20:26:57.353592  305296 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 from cache
	I1227 20:26:57.353621  305296 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0
	I1227 20:26:57.353672  305296 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0
	I1227 20:26:58.643552  305296 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0: (1.289850096s)
	I1227 20:26:58.643587  305296 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 from cache
	I1227 20:26:58.643612  305296 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0
	I1227 20:26:58.643658  305296 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0
	I1227 20:26:57.578900  297482 addons.go:530] duration metric: took 580.512082ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1227 20:26:57.861486  297482 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-762177" context rescaled to 1 replicas
	W1227 20:26:55.539176  292834 pod_ready.go:104] pod "coredns-7d764666f9-xvqzs" is not "Ready", error: <nil>
	W1227 20:26:58.038205  292834 pod_ready.go:104] pod "coredns-7d764666f9-xvqzs" is not "Ready", error: <nil>
	I1227 20:26:59.771355  305296 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0: (1.127674556s)
	I1227 20:26:59.771383  305296 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 from cache
	I1227 20:26:59.771408  305296 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1227 20:26:59.771444  305296 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1227 20:27:00.320588  305296 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1227 20:27:00.320634  305296 cache_images.go:125] Successfully loaded all cached images
	I1227 20:27:00.320641  305296 cache_images.go:94] duration metric: took 9.79752728s to LoadCachedImages
	I1227 20:27:00.320657  305296 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 crio true true} ...
	I1227 20:27:00.320767  305296 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-014435 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-014435 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:27:00.320846  305296 ssh_runner.go:195] Run: crio config
	I1227 20:27:00.367589  305296 cni.go:84] Creating CNI manager for ""
	I1227 20:27:00.367612  305296 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:27:00.367629  305296 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:27:00.367659  305296 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-014435 NodeName:no-preload-014435 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:27:00.367822  305296 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-014435"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:27:00.367890  305296 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:27:00.376547  305296 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0': No such file or directory
	
	Initiating transfer...
	I1227 20:27:00.376598  305296 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0
	I1227 20:27:00.384498  305296 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubelet.sha256
	I1227 20:27:00.384541  305296 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl.sha256
	I1227 20:27:00.384553  305296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:27:00.384607  305296 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl
	I1227 20:27:00.384502  305296 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubeadm.sha256
	I1227 20:27:00.384726  305296 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm
	I1227 20:27:00.397574  305296 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubectl': No such file or directory
	I1227 20:27:00.397605  305296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/cache/linux/amd64/v1.35.0/kubectl --> /var/lib/minikube/binaries/v1.35.0/kubectl (58597560 bytes)
	I1227 20:27:00.397634  305296 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubeadm': No such file or directory
	I1227 20:27:00.397580  305296 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet
	I1227 20:27:00.397656  305296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/cache/linux/amd64/v1.35.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0/kubeadm (72368312 bytes)
	I1227 20:27:00.407216  305296 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubelet': No such file or directory
	I1227 20:27:00.407248  305296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/cache/linux/amd64/v1.35.0/kubelet --> /var/lib/minikube/binaries/v1.35.0/kubelet (58110244 bytes)
	I1227 20:27:00.960789  305296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:27:00.969256  305296 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1227 20:27:00.982604  305296 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:27:01.111699  305296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1227 20:27:01.125109  305296 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:27:01.129071  305296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:27:01.195026  305296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:27:01.278597  305296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:27:01.308140  305296 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435 for IP: 192.168.94.2
	I1227 20:27:01.308166  305296 certs.go:195] generating shared ca certs ...
	I1227 20:27:01.308189  305296 certs.go:227] acquiring lock for ca certs: {Name:mkf159d13acb73e6d0ac046e4aaef1dce56a185d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:27:01.308362  305296 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key
	I1227 20:27:01.308401  305296 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key
	I1227 20:27:01.308410  305296 certs.go:257] generating profile certs ...
	I1227 20:27:01.308464  305296 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/client.key
	I1227 20:27:01.308483  305296 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/client.crt with IP's: []
	I1227 20:27:01.429090  305296 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/client.crt ...
	I1227 20:27:01.429115  305296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/client.crt: {Name:mk7777f88f2ba94f3eb8ff63717df77f4d9d8431 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:27:01.429283  305296 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/client.key ...
	I1227 20:27:01.429296  305296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/client.key: {Name:mka4b4cad0eb6be414dd6cc58f6265965fc9b3a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:27:01.429375  305296 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/apiserver.key.00c17d97
	I1227 20:27:01.429390  305296 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/apiserver.crt.00c17d97 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1227 20:27:01.569040  305296 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/apiserver.crt.00c17d97 ...
	I1227 20:27:01.569066  305296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/apiserver.crt.00c17d97: {Name:mk61f3e60bff72ecdcd36f976c935bb3ea0c9442 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:27:01.598115  305296 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/apiserver.key.00c17d97 ...
	I1227 20:27:01.598141  305296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/apiserver.key.00c17d97: {Name:mkffb4a7a83d6ce21642ab9d8c6174b32565a360 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:27:01.598241  305296 certs.go:382] copying /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/apiserver.crt.00c17d97 -> /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/apiserver.crt
	I1227 20:27:01.598330  305296 certs.go:386] copying /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/apiserver.key.00c17d97 -> /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/apiserver.key
	I1227 20:27:01.598388  305296 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/proxy-client.key
	I1227 20:27:01.598403  305296 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/proxy-client.crt with IP's: []
	I1227 20:27:01.680159  305296 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/proxy-client.crt ...
	I1227 20:27:01.680185  305296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/proxy-client.crt: {Name:mkb52645187ff48b062334fbf608d49ad4d20a20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:27:01.680354  305296 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/proxy-client.key ...
	I1227 20:27:01.680371  305296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/proxy-client.key: {Name:mk70b4dd228df24e9e0f35d02f31a409f31b90bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:27:01.680580  305296 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem (1338 bytes)
	W1227 20:27:01.680625  305296 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427_empty.pem, impossibly tiny 0 bytes
	I1227 20:27:01.680639  305296 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 20:27:01.680663  305296 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:27:01.680688  305296 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:27:01.680712  305296 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem (1675 bytes)
	I1227 20:27:01.680765  305296 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:27:01.681529  305296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:27:01.699888  305296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:27:01.716884  305296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:27:01.734043  305296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 20:27:01.752742  305296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 20:27:01.770034  305296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 20:27:01.787442  305296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:27:01.804600  305296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:27:01.822029  305296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem --> /usr/share/ca-certificates/14427.pem (1338 bytes)
	I1227 20:27:01.839981  305296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /usr/share/ca-certificates/144272.pem (1708 bytes)
	I1227 20:27:01.857370  305296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:27:01.874174  305296 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:27:01.886231  305296 ssh_runner.go:195] Run: openssl version
	I1227 20:27:01.892245  305296 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14427.pem
	I1227 20:27:01.899578  305296 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14427.pem /etc/ssl/certs/14427.pem
	I1227 20:27:01.907188  305296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14427.pem
	I1227 20:27:01.910779  305296 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 19:58 /usr/share/ca-certificates/14427.pem
	I1227 20:27:01.910822  305296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14427.pem
	I1227 20:27:01.944104  305296 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:27:01.951521  305296 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14427.pem /etc/ssl/certs/51391683.0
	I1227 20:27:01.959215  305296 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/144272.pem
	I1227 20:27:01.966788  305296 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/144272.pem /etc/ssl/certs/144272.pem
	I1227 20:27:01.974028  305296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144272.pem
	I1227 20:27:01.977611  305296 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 19:58 /usr/share/ca-certificates/144272.pem
	I1227 20:27:01.977659  305296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144272.pem
	I1227 20:27:02.012135  305296 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:27:02.019589  305296 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/144272.pem /etc/ssl/certs/3ec20f2e.0
	I1227 20:27:02.026757  305296 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:27:02.034155  305296 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:27:02.041878  305296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:27:02.045531  305296 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:27:02.045584  305296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:27:02.080129  305296 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:27:02.087715  305296 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 20:27:02.094970  305296 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:27:02.098854  305296 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 20:27:02.098901  305296 kubeadm.go:401] StartCluster: {Name:no-preload-014435 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-014435 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:27:02.099000  305296 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:27:02.099038  305296 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:27:02.125256  305296 cri.go:96] found id: ""
	I1227 20:27:02.125305  305296 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:27:02.133308  305296 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 20:27:02.141891  305296 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 20:27:02.141979  305296 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 20:27:02.150715  305296 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 20:27:02.150734  305296 kubeadm.go:158] found existing configuration files:
	
	I1227 20:27:02.150782  305296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 20:27:02.159709  305296 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 20:27:02.159759  305296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 20:27:02.168261  305296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 20:27:02.176417  305296 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 20:27:02.176457  305296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 20:27:02.183751  305296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 20:27:02.191273  305296 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 20:27:02.191323  305296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 20:27:02.198454  305296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 20:27:02.206263  305296 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 20:27:02.206323  305296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 20:27:02.213802  305296 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 20:27:02.248694  305296 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 20:27:02.248794  305296 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 20:27:02.311354  305296 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 20:27:02.311447  305296 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1227 20:27:02.311494  305296 kubeadm.go:319] OS: Linux
	I1227 20:27:02.311554  305296 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 20:27:02.311612  305296 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 20:27:02.311688  305296 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 20:27:02.311769  305296 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 20:27:02.311862  305296 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 20:27:02.311962  305296 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 20:27:02.312031  305296 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 20:27:02.312089  305296 kubeadm.go:319] CGROUPS_IO: enabled
	I1227 20:27:02.371054  305296 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 20:27:02.371202  305296 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 20:27:02.371357  305296 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 20:27:02.384176  305296 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 20:27:02.386338  305296 out.go:252]   - Generating certificates and keys ...
	I1227 20:27:02.386464  305296 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 20:27:02.386565  305296 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 20:27:02.449525  305296 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 20:27:02.498197  305296 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 20:27:02.547093  305296 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 20:27:02.620768  305296 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 20:27:02.689370  305296 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 20:27:02.689607  305296 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-014435] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1227 20:27:02.715143  305296 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 20:27:02.715282  305296 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-014435] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1227 20:27:02.776767  305296 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 20:27:02.979290  305296 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 20:27:02.998023  305296 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 20:27:02.998173  305296 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 20:27:03.032632  305296 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 20:27:03.132332  305296 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 20:27:03.287052  305296 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 20:27:03.309449  305296 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 20:27:03.473250  305296 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 20:27:03.473774  305296 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 20:27:03.477614  305296 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 20:27:03.479153  305296 out.go:252]   - Booting up control plane ...
	I1227 20:27:03.479277  305296 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 20:27:03.479390  305296 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 20:27:03.480103  305296 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 20:27:03.494413  305296 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 20:27:03.494536  305296 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 20:27:03.500710  305296 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 20:27:03.501078  305296 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 20:27:03.501143  305296 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 20:27:03.610852  305296 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 20:27:03.611060  305296 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1227 20:26:59.362778  297482 node_ready.go:57] node "old-k8s-version-762177" has "Ready":"False" status (will retry)
	W1227 20:27:01.362907  297482 node_ready.go:57] node "old-k8s-version-762177" has "Ready":"False" status (will retry)
	W1227 20:27:03.363212  297482 node_ready.go:57] node "old-k8s-version-762177" has "Ready":"False" status (will retry)
	W1227 20:27:00.538127  292834 pod_ready.go:104] pod "coredns-7d764666f9-xvqzs" is not "Ready", error: <nil>
	W1227 20:27:03.038317  292834 pod_ready.go:104] pod "coredns-7d764666f9-xvqzs" is not "Ready", error: <nil>
	I1227 20:27:04.112520  305296 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.782728ms
	I1227 20:27:04.115439  305296 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 20:27:04.115580  305296 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1227 20:27:04.115718  305296 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 20:27:04.115794  305296 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 20:27:04.621166  305296 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 505.584393ms
	I1227 20:27:05.551451  305296 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.43595129s
	I1227 20:27:07.618438  305296 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502855855s
	I1227 20:27:07.635982  305296 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 20:27:07.645528  305296 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 20:27:07.653796  305296 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 20:27:07.654094  305296 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-014435 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 20:27:07.662129  305296 kubeadm.go:319] [bootstrap-token] Using token: ld7a54.sjan5ckx70x06lby
	I1227 20:27:07.663428  305296 out.go:252]   - Configuring RBAC rules ...
	I1227 20:27:07.663605  305296 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 20:27:07.666878  305296 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 20:27:07.672188  305296 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 20:27:07.675452  305296 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 20:27:07.677641  305296 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 20:27:07.680000  305296 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 20:27:08.023604  305296 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 20:27:08.438653  305296 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 20:27:09.024235  305296 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 20:27:09.025061  305296 kubeadm.go:319] 
	I1227 20:27:09.025143  305296 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 20:27:09.025155  305296 kubeadm.go:319] 
	I1227 20:27:09.025227  305296 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 20:27:09.025236  305296 kubeadm.go:319] 
	I1227 20:27:09.025257  305296 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 20:27:09.025310  305296 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 20:27:09.025363  305296 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 20:27:09.025371  305296 kubeadm.go:319] 
	I1227 20:27:09.025423  305296 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 20:27:09.025429  305296 kubeadm.go:319] 
	I1227 20:27:09.025475  305296 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 20:27:09.025496  305296 kubeadm.go:319] 
	I1227 20:27:09.025590  305296 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 20:27:09.025709  305296 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 20:27:09.025824  305296 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 20:27:09.025836  305296 kubeadm.go:319] 
	I1227 20:27:09.025988  305296 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 20:27:09.026126  305296 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 20:27:09.026139  305296 kubeadm.go:319] 
	I1227 20:27:09.026251  305296 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ld7a54.sjan5ckx70x06lby \
	I1227 20:27:09.026411  305296 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da6685172cfe638aa995cfd5ba180acce81fcf1770a93ae27b4f215e6d45ef35 \
	I1227 20:27:09.026432  305296 kubeadm.go:319] 	--control-plane 
	I1227 20:27:09.026438  305296 kubeadm.go:319] 
	I1227 20:27:09.026555  305296 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 20:27:09.026574  305296 kubeadm.go:319] 
	I1227 20:27:09.026697  305296 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ld7a54.sjan5ckx70x06lby \
	I1227 20:27:09.026789  305296 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da6685172cfe638aa995cfd5ba180acce81fcf1770a93ae27b4f215e6d45ef35 
	I1227 20:27:09.028898  305296 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1227 20:27:09.029050  305296 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 20:27:09.029079  305296 cni.go:84] Creating CNI manager for ""
	I1227 20:27:09.029093  305296 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:27:09.030513  305296 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1227 20:27:05.364979  297482 node_ready.go:57] node "old-k8s-version-762177" has "Ready":"False" status (will retry)
	W1227 20:27:07.862065  297482 node_ready.go:57] node "old-k8s-version-762177" has "Ready":"False" status (will retry)
	W1227 20:27:05.540536  292834 pod_ready.go:104] pod "coredns-7d764666f9-xvqzs" is not "Ready", error: <nil>
	W1227 20:27:08.038206  292834 pod_ready.go:104] pod "coredns-7d764666f9-xvqzs" is not "Ready", error: <nil>
	I1227 20:27:09.037861  292834 pod_ready.go:94] pod "coredns-7d764666f9-xvqzs" is "Ready"
	I1227 20:27:09.037885  292834 pod_ready.go:86] duration metric: took 33.005859124s for pod "coredns-7d764666f9-xvqzs" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:09.037899  292834 pod_ready.go:83] waiting for pod "coredns-7d764666f9-z24nv" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:09.039719  292834 pod_ready.go:99] pod "coredns-7d764666f9-z24nv" in "kube-system" namespace is gone: getting pod "coredns-7d764666f9-z24nv" in "kube-system" namespace (will retry): pods "coredns-7d764666f9-z24nv" not found
	I1227 20:27:09.039737  292834 pod_ready.go:86] duration metric: took 1.831748ms for pod "coredns-7d764666f9-z24nv" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:09.041930  292834 pod_ready.go:83] waiting for pod "etcd-bridge-436655" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:09.045772  292834 pod_ready.go:94] pod "etcd-bridge-436655" is "Ready"
	I1227 20:27:09.045790  292834 pod_ready.go:86] duration metric: took 3.836701ms for pod "etcd-bridge-436655" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:09.047826  292834 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-436655" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:09.051889  292834 pod_ready.go:94] pod "kube-apiserver-bridge-436655" is "Ready"
	I1227 20:27:09.051941  292834 pod_ready.go:86] duration metric: took 4.088917ms for pod "kube-apiserver-bridge-436655" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:09.053782  292834 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-436655" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:09.436525  292834 pod_ready.go:94] pod "kube-controller-manager-bridge-436655" is "Ready"
	I1227 20:27:09.436560  292834 pod_ready.go:86] duration metric: took 382.760367ms for pod "kube-controller-manager-bridge-436655" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:09.636410  292834 pod_ready.go:83] waiting for pod "kube-proxy-4gn94" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:10.035852  292834 pod_ready.go:94] pod "kube-proxy-4gn94" is "Ready"
	I1227 20:27:10.035877  292834 pod_ready.go:86] duration metric: took 399.441684ms for pod "kube-proxy-4gn94" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:10.236578  292834 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-436655" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:10.637254  292834 pod_ready.go:94] pod "kube-scheduler-bridge-436655" is "Ready"
	I1227 20:27:10.637283  292834 pod_ready.go:86] duration metric: took 400.673059ms for pod "kube-scheduler-bridge-436655" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:10.637299  292834 pod_ready.go:40] duration metric: took 34.609094411s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:27:10.688067  292834 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 20:27:10.689532  292834 out.go:179] * Done! kubectl is now configured to use "bridge-436655" cluster and "default" namespace by default
	W1227 20:27:09.862754  297482 node_ready.go:57] node "old-k8s-version-762177" has "Ready":"False" status (will retry)
	I1227 20:27:10.362305  297482 node_ready.go:49] node "old-k8s-version-762177" is "Ready"
	I1227 20:27:10.362334  297482 node_ready.go:38] duration metric: took 13.003428833s for node "old-k8s-version-762177" to be "Ready" ...
	I1227 20:27:10.362349  297482 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:27:10.362401  297482 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:27:10.376342  297482 api_server.go:72] duration metric: took 13.378001156s to wait for apiserver process to appear ...
	I1227 20:27:10.376372  297482 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:27:10.376391  297482 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1227 20:27:10.381898  297482 api_server.go:325] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1227 20:27:10.383299  297482 api_server.go:141] control plane version: v1.28.0
	I1227 20:27:10.383326  297482 api_server.go:131] duration metric: took 6.946552ms to wait for apiserver health ...
	I1227 20:27:10.383336  297482 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:27:10.387503  297482 system_pods.go:59] 8 kube-system pods found
	I1227 20:27:10.387544  297482 system_pods.go:61] "coredns-5dd5756b68-lklgt" [022c6c7c-4655-42a5-8b6f-390cdb0e7623] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:27:10.387554  297482 system_pods.go:61] "etcd-old-k8s-version-762177" [f7b8f7e2-e8d2-44d1-80be-2e4b7a340485] Running
	I1227 20:27:10.387562  297482 system_pods.go:61] "kindnet-89clv" [93c4fc0d-f1ce-41d7-9c7b-17ed626306c7] Running
	I1227 20:27:10.387570  297482 system_pods.go:61] "kube-apiserver-old-k8s-version-762177" [61d3e10e-a43d-4894-ba40-3dde3a134580] Running
	I1227 20:27:10.387581  297482 system_pods.go:61] "kube-controller-manager-old-k8s-version-762177" [96a852e0-fe6f-4b1b-8e56-818b2e258183] Running
	I1227 20:27:10.387586  297482 system_pods.go:61] "kube-proxy-99q8t" [eadfafbe-2007-418a-a8bd-85fa704db6b6] Running
	I1227 20:27:10.387591  297482 system_pods.go:61] "kube-scheduler-old-k8s-version-762177" [047becb2-f15f-4832-8ab3-8b90f4e045ae] Running
	I1227 20:27:10.387606  297482 system_pods.go:61] "storage-provisioner" [58875037-9be0-4a9d-b2e6-7deb9514ad33] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:27:10.387613  297482 system_pods.go:74] duration metric: took 4.270721ms to wait for pod list to return data ...
	I1227 20:27:10.387626  297482 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:27:10.390087  297482 default_sa.go:45] found service account: "default"
	I1227 20:27:10.390108  297482 default_sa.go:55] duration metric: took 2.475371ms for default service account to be created ...
	I1227 20:27:10.390118  297482 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:27:10.393021  297482 system_pods.go:86] 8 kube-system pods found
	I1227 20:27:10.393043  297482 system_pods.go:89] "coredns-5dd5756b68-lklgt" [022c6c7c-4655-42a5-8b6f-390cdb0e7623] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:27:10.393048  297482 system_pods.go:89] "etcd-old-k8s-version-762177" [f7b8f7e2-e8d2-44d1-80be-2e4b7a340485] Running
	I1227 20:27:10.393054  297482 system_pods.go:89] "kindnet-89clv" [93c4fc0d-f1ce-41d7-9c7b-17ed626306c7] Running
	I1227 20:27:10.393058  297482 system_pods.go:89] "kube-apiserver-old-k8s-version-762177" [61d3e10e-a43d-4894-ba40-3dde3a134580] Running
	I1227 20:27:10.393061  297482 system_pods.go:89] "kube-controller-manager-old-k8s-version-762177" [96a852e0-fe6f-4b1b-8e56-818b2e258183] Running
	I1227 20:27:10.393064  297482 system_pods.go:89] "kube-proxy-99q8t" [eadfafbe-2007-418a-a8bd-85fa704db6b6] Running
	I1227 20:27:10.393068  297482 system_pods.go:89] "kube-scheduler-old-k8s-version-762177" [047becb2-f15f-4832-8ab3-8b90f4e045ae] Running
	I1227 20:27:10.393075  297482 system_pods.go:89] "storage-provisioner" [58875037-9be0-4a9d-b2e6-7deb9514ad33] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:27:10.393114  297482 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1227 20:27:10.595321  297482 system_pods.go:86] 8 kube-system pods found
	I1227 20:27:10.595370  297482 system_pods.go:89] "coredns-5dd5756b68-lklgt" [022c6c7c-4655-42a5-8b6f-390cdb0e7623] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:27:10.595378  297482 system_pods.go:89] "etcd-old-k8s-version-762177" [f7b8f7e2-e8d2-44d1-80be-2e4b7a340485] Running
	I1227 20:27:10.595389  297482 system_pods.go:89] "kindnet-89clv" [93c4fc0d-f1ce-41d7-9c7b-17ed626306c7] Running
	I1227 20:27:10.595395  297482 system_pods.go:89] "kube-apiserver-old-k8s-version-762177" [61d3e10e-a43d-4894-ba40-3dde3a134580] Running
	I1227 20:27:10.595400  297482 system_pods.go:89] "kube-controller-manager-old-k8s-version-762177" [96a852e0-fe6f-4b1b-8e56-818b2e258183] Running
	I1227 20:27:10.595405  297482 system_pods.go:89] "kube-proxy-99q8t" [eadfafbe-2007-418a-a8bd-85fa704db6b6] Running
	I1227 20:27:10.595410  297482 system_pods.go:89] "kube-scheduler-old-k8s-version-762177" [047becb2-f15f-4832-8ab3-8b90f4e045ae] Running
	I1227 20:27:10.595416  297482 system_pods.go:89] "storage-provisioner" [58875037-9be0-4a9d-b2e6-7deb9514ad33] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:27:10.963869  297482 system_pods.go:86] 8 kube-system pods found
	I1227 20:27:10.963909  297482 system_pods.go:89] "coredns-5dd5756b68-lklgt" [022c6c7c-4655-42a5-8b6f-390cdb0e7623] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:27:10.963931  297482 system_pods.go:89] "etcd-old-k8s-version-762177" [f7b8f7e2-e8d2-44d1-80be-2e4b7a340485] Running
	I1227 20:27:10.963940  297482 system_pods.go:89] "kindnet-89clv" [93c4fc0d-f1ce-41d7-9c7b-17ed626306c7] Running
	I1227 20:27:10.963945  297482 system_pods.go:89] "kube-apiserver-old-k8s-version-762177" [61d3e10e-a43d-4894-ba40-3dde3a134580] Running
	I1227 20:27:10.963952  297482 system_pods.go:89] "kube-controller-manager-old-k8s-version-762177" [96a852e0-fe6f-4b1b-8e56-818b2e258183] Running
	I1227 20:27:10.963957  297482 system_pods.go:89] "kube-proxy-99q8t" [eadfafbe-2007-418a-a8bd-85fa704db6b6] Running
	I1227 20:27:10.963962  297482 system_pods.go:89] "kube-scheduler-old-k8s-version-762177" [047becb2-f15f-4832-8ab3-8b90f4e045ae] Running
	I1227 20:27:10.963971  297482 system_pods.go:89] "storage-provisioner" [58875037-9be0-4a9d-b2e6-7deb9514ad33] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:27:11.298709  297482 system_pods.go:86] 8 kube-system pods found
	I1227 20:27:11.298746  297482 system_pods.go:89] "coredns-5dd5756b68-lklgt" [022c6c7c-4655-42a5-8b6f-390cdb0e7623] Running
	I1227 20:27:11.298755  297482 system_pods.go:89] "etcd-old-k8s-version-762177" [f7b8f7e2-e8d2-44d1-80be-2e4b7a340485] Running
	I1227 20:27:11.298760  297482 system_pods.go:89] "kindnet-89clv" [93c4fc0d-f1ce-41d7-9c7b-17ed626306c7] Running
	I1227 20:27:11.298766  297482 system_pods.go:89] "kube-apiserver-old-k8s-version-762177" [61d3e10e-a43d-4894-ba40-3dde3a134580] Running
	I1227 20:27:11.298772  297482 system_pods.go:89] "kube-controller-manager-old-k8s-version-762177" [96a852e0-fe6f-4b1b-8e56-818b2e258183] Running
	I1227 20:27:11.298779  297482 system_pods.go:89] "kube-proxy-99q8t" [eadfafbe-2007-418a-a8bd-85fa704db6b6] Running
	I1227 20:27:11.298789  297482 system_pods.go:89] "kube-scheduler-old-k8s-version-762177" [047becb2-f15f-4832-8ab3-8b90f4e045ae] Running
	I1227 20:27:11.298801  297482 system_pods.go:89] "storage-provisioner" [58875037-9be0-4a9d-b2e6-7deb9514ad33] Running
	I1227 20:27:11.298816  297482 system_pods.go:126] duration metric: took 908.691222ms to wait for k8s-apps to be running ...
	I1227 20:27:11.298828  297482 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:27:11.298877  297482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:27:11.312898  297482 system_svc.go:56] duration metric: took 14.061242ms WaitForService to wait for kubelet
	I1227 20:27:11.312963  297482 kubeadm.go:587] duration metric: took 14.314625142s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:27:11.312990  297482 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:27:11.315675  297482 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 20:27:11.315695  297482 node_conditions.go:123] node cpu capacity is 8
	I1227 20:27:11.315708  297482 node_conditions.go:105] duration metric: took 2.713359ms to run NodePressure ...
	I1227 20:27:11.315719  297482 start.go:242] waiting for startup goroutines ...
	I1227 20:27:11.315725  297482 start.go:247] waiting for cluster config update ...
	I1227 20:27:11.315737  297482 start.go:256] writing updated cluster config ...
	I1227 20:27:11.316048  297482 ssh_runner.go:195] Run: rm -f paused
	I1227 20:27:11.319702  297482 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:27:11.323814  297482 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-lklgt" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:11.328063  297482 pod_ready.go:94] pod "coredns-5dd5756b68-lklgt" is "Ready"
	I1227 20:27:11.328080  297482 pod_ready.go:86] duration metric: took 4.245326ms for pod "coredns-5dd5756b68-lklgt" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:11.331013  297482 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-762177" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:11.335241  297482 pod_ready.go:94] pod "etcd-old-k8s-version-762177" is "Ready"
	I1227 20:27:11.335263  297482 pod_ready.go:86] duration metric: took 4.228208ms for pod "etcd-old-k8s-version-762177" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:11.338079  297482 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-762177" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:11.342647  297482 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-762177" is "Ready"
	I1227 20:27:11.342673  297482 pod_ready.go:86] duration metric: took 4.575276ms for pod "kube-apiserver-old-k8s-version-762177" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:11.345443  297482 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-762177" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:11.726545  297482 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-762177" is "Ready"
	I1227 20:27:11.726608  297482 pod_ready.go:86] duration metric: took 381.142591ms for pod "kube-controller-manager-old-k8s-version-762177" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:11.927792  297482 pod_ready.go:83] waiting for pod "kube-proxy-99q8t" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:12.325611  297482 pod_ready.go:94] pod "kube-proxy-99q8t" is "Ready"
	I1227 20:27:12.325642  297482 pod_ready.go:86] duration metric: took 397.821907ms for pod "kube-proxy-99q8t" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:12.526113  297482 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-762177" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:12.924904  297482 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-762177" is "Ready"
	I1227 20:27:12.925052  297482 pod_ready.go:86] duration metric: took 398.85721ms for pod "kube-scheduler-old-k8s-version-762177" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:12.925070  297482 pod_ready.go:40] duration metric: took 1.605337997s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:27:12.986872  297482 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1227 20:27:12.988487  297482 out.go:203] 
	W1227 20:27:12.989650  297482 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1227 20:27:12.990841  297482 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1227 20:27:12.992440  297482 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-762177" cluster and "default" namespace by default
	I1227 20:27:09.031498  305296 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 20:27:09.036385  305296 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1227 20:27:09.036401  305296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 20:27:09.052666  305296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1227 20:27:09.289664  305296 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 20:27:09.289782  305296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:27:09.289826  305296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-014435 minikube.k8s.io/updated_at=2025_12_27T20_27_09_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562 minikube.k8s.io/name=no-preload-014435 minikube.k8s.io/primary=true
	I1227 20:27:09.302361  305296 ops.go:34] apiserver oom_adj: -16
	I1227 20:27:09.368122  305296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:27:09.868462  305296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:27:10.368605  305296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:27:10.869079  305296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:27:11.369126  305296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:27:11.868547  305296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:27:12.368541  305296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:27:12.868817  305296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:27:13.368661  305296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:27:13.445609  305296 kubeadm.go:1114] duration metric: took 4.15588378s to wait for elevateKubeSystemPrivileges
	I1227 20:27:13.445639  305296 kubeadm.go:403] duration metric: took 11.346740808s to StartCluster
	I1227 20:27:13.445655  305296 settings.go:142] acquiring lock: {Name:mkf3077b12a3cef0d75d0d0642c2652aa74718f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:27:13.445727  305296 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:27:13.447256  305296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/kubeconfig: {Name:mk4ccafb996928672a8e78a62ce47d8add645009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:27:13.447523  305296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 20:27:13.447546  305296 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:27:13.447629  305296 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:27:13.447727  305296 addons.go:70] Setting storage-provisioner=true in profile "no-preload-014435"
	I1227 20:27:13.447767  305296 addons.go:70] Setting default-storageclass=true in profile "no-preload-014435"
	I1227 20:27:13.447862  305296 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-014435"
	I1227 20:27:13.447792  305296 addons.go:239] Setting addon storage-provisioner=true in "no-preload-014435"
	I1227 20:27:13.447984  305296 host.go:66] Checking if "no-preload-014435" exists ...
	I1227 20:27:13.447791  305296 config.go:182] Loaded profile config "no-preload-014435": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:27:13.448305  305296 cli_runner.go:164] Run: docker container inspect no-preload-014435 --format={{.State.Status}}
	I1227 20:27:13.448508  305296 cli_runner.go:164] Run: docker container inspect no-preload-014435 --format={{.State.Status}}
	I1227 20:27:13.449046  305296 out.go:179] * Verifying Kubernetes components...
	I1227 20:27:13.451394  305296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:27:13.475577  305296 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:27:13.476362  305296 addons.go:239] Setting addon default-storageclass=true in "no-preload-014435"
	I1227 20:27:13.476410  305296 host.go:66] Checking if "no-preload-014435" exists ...
	I1227 20:27:13.476613  305296 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:27:13.476628  305296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:27:13.476681  305296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:27:13.476893  305296 cli_runner.go:164] Run: docker container inspect no-preload-014435 --format={{.State.Status}}
	I1227 20:27:13.520090  305296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/no-preload-014435/id_rsa Username:docker}
	I1227 20:27:13.523101  305296 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:27:13.523124  305296 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:27:13.523184  305296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:27:13.551773  305296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/no-preload-014435/id_rsa Username:docker}
	I1227 20:27:13.572145  305296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 20:27:13.629183  305296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:27:13.656769  305296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:27:13.669334  305296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:27:13.798856  305296 start.go:987] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1227 20:27:13.800273  305296 node_ready.go:35] waiting up to 6m0s for node "no-preload-014435" to be "Ready" ...
	I1227 20:27:14.004662  305296 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1227 20:27:14.005666  305296 addons.go:530] duration metric: took 558.038634ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1227 20:27:14.303835  305296 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-014435" context rescaled to 1 replicas
	W1227 20:27:15.803318  305296 node_ready.go:57] node "no-preload-014435" has "Ready":"False" status (will retry)
	W1227 20:27:17.803985  305296 node_ready.go:57] node "no-preload-014435" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Dec 27 20:27:10 old-k8s-version-762177 crio[777]: time="2025-12-27T20:27:10.610229857Z" level=info msg="Starting container: 6fde638d64a8538e5d1060db32cdf062283998597675375b1737492d9e419826" id=bbce5f9a-137f-4006-a929-8861ac79a45b name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:27:10 old-k8s-version-762177 crio[777]: time="2025-12-27T20:27:10.612381346Z" level=info msg="Started container" PID=2145 containerID=6fde638d64a8538e5d1060db32cdf062283998597675375b1737492d9e419826 description=kube-system/coredns-5dd5756b68-lklgt/coredns id=bbce5f9a-137f-4006-a929-8861ac79a45b name=/runtime.v1.RuntimeService/StartContainer sandboxID=401352d97d015d55a7c304712117c55df1c32debf9d1603e4111f734f35aae65
	Dec 27 20:27:13 old-k8s-version-762177 crio[777]: time="2025-12-27T20:27:13.506966523Z" level=info msg="Running pod sandbox: default/busybox/POD" id=dc196aa2-21c4-4f3c-a335-9a974ea98801 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:27:13 old-k8s-version-762177 crio[777]: time="2025-12-27T20:27:13.507060563Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:27:13 old-k8s-version-762177 crio[777]: time="2025-12-27T20:27:13.518897579Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2d1de8f872f58c63d77f1156df167d5a1ee51dc362a7237023ddfe36cc446174 UID:361ecc55-6296-4f19-ba72-adde33ca680f NetNS:/var/run/netns/ae6b682a-dbf3-437d-8167-e8375c42d8d6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005172a8}] Aliases:map[]}"
	Dec 27 20:27:13 old-k8s-version-762177 crio[777]: time="2025-12-27T20:27:13.518966811Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 27 20:27:13 old-k8s-version-762177 crio[777]: time="2025-12-27T20:27:13.534403867Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2d1de8f872f58c63d77f1156df167d5a1ee51dc362a7237023ddfe36cc446174 UID:361ecc55-6296-4f19-ba72-adde33ca680f NetNS:/var/run/netns/ae6b682a-dbf3-437d-8167-e8375c42d8d6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005172a8}] Aliases:map[]}"
	Dec 27 20:27:13 old-k8s-version-762177 crio[777]: time="2025-12-27T20:27:13.534611071Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 27 20:27:13 old-k8s-version-762177 crio[777]: time="2025-12-27T20:27:13.535893534Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 27 20:27:13 old-k8s-version-762177 crio[777]: time="2025-12-27T20:27:13.537160736Z" level=info msg="Ran pod sandbox 2d1de8f872f58c63d77f1156df167d5a1ee51dc362a7237023ddfe36cc446174 with infra container: default/busybox/POD" id=dc196aa2-21c4-4f3c-a335-9a974ea98801 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:27:13 old-k8s-version-762177 crio[777]: time="2025-12-27T20:27:13.538675008Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bfe2c544-7bb2-424f-8035-fa66b547d7a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:27:13 old-k8s-version-762177 crio[777]: time="2025-12-27T20:27:13.538813084Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=bfe2c544-7bb2-424f-8035-fa66b547d7a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:27:13 old-k8s-version-762177 crio[777]: time="2025-12-27T20:27:13.538853695Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=bfe2c544-7bb2-424f-8035-fa66b547d7a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:27:13 old-k8s-version-762177 crio[777]: time="2025-12-27T20:27:13.539769178Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=85bc51fb-f98f-4e5d-9bf4-750b057b7b54 name=/runtime.v1.ImageService/PullImage
	Dec 27 20:27:13 old-k8s-version-762177 crio[777]: time="2025-12-27T20:27:13.541528497Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 27 20:27:14 old-k8s-version-762177 crio[777]: time="2025-12-27T20:27:14.182377263Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=85bc51fb-f98f-4e5d-9bf4-750b057b7b54 name=/runtime.v1.ImageService/PullImage
	Dec 27 20:27:14 old-k8s-version-762177 crio[777]: time="2025-12-27T20:27:14.18340883Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9519b804-98d4-4fb0-b2f1-62859214a7e1 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:27:14 old-k8s-version-762177 crio[777]: time="2025-12-27T20:27:14.18511152Z" level=info msg="Creating container: default/busybox/busybox" id=db7d56ac-bef2-49c6-874d-861ab2fb5f5a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:27:14 old-k8s-version-762177 crio[777]: time="2025-12-27T20:27:14.185252542Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:27:14 old-k8s-version-762177 crio[777]: time="2025-12-27T20:27:14.190112599Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:27:14 old-k8s-version-762177 crio[777]: time="2025-12-27T20:27:14.190505654Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:27:14 old-k8s-version-762177 crio[777]: time="2025-12-27T20:27:14.215392527Z" level=info msg="Created container bfdbdf1ee953af7a1a76a5855a5f2e5c84b26d96ad3b782eb8baef3092f7ff0a: default/busybox/busybox" id=db7d56ac-bef2-49c6-874d-861ab2fb5f5a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:27:14 old-k8s-version-762177 crio[777]: time="2025-12-27T20:27:14.215980193Z" level=info msg="Starting container: bfdbdf1ee953af7a1a76a5855a5f2e5c84b26d96ad3b782eb8baef3092f7ff0a" id=09a6af7d-a114-4d53-b67f-ca08464656e4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:27:14 old-k8s-version-762177 crio[777]: time="2025-12-27T20:27:14.21753922Z" level=info msg="Started container" PID=2218 containerID=bfdbdf1ee953af7a1a76a5855a5f2e5c84b26d96ad3b782eb8baef3092f7ff0a description=default/busybox/busybox id=09a6af7d-a114-4d53-b67f-ca08464656e4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2d1de8f872f58c63d77f1156df167d5a1ee51dc362a7237023ddfe36cc446174
	Dec 27 20:27:21 old-k8s-version-762177 crio[777]: time="2025-12-27T20:27:21.29311729Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	bfdbdf1ee953a       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   2d1de8f872f58       busybox                                          default
	6fde638d64a85       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      11 seconds ago      Running             coredns                   0                   401352d97d015       coredns-5dd5756b68-lklgt                         kube-system
	c3d11db5a058d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   c327783734817       storage-provisioner                              kube-system
	558cc9a5b1c0d       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    23 seconds ago      Running             kindnet-cni               0                   10cb5835c1b8d       kindnet-89clv                                    kube-system
	99a9e412f59f3       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      25 seconds ago      Running             kube-proxy                0                   62ac3218de066       kube-proxy-99q8t                                 kube-system
	9beb16b56a07e       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      43 seconds ago      Running             kube-scheduler            0                   9253b76e46893       kube-scheduler-old-k8s-version-762177            kube-system
	5af58efcc8290       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      43 seconds ago      Running             etcd                      0                   541e587145d85       etcd-old-k8s-version-762177                      kube-system
	3d51fa6cac0fe       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      43 seconds ago      Running             kube-apiserver            0                   2060228ec89cd       kube-apiserver-old-k8s-version-762177            kube-system
	5acd83f5cf5b5       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      43 seconds ago      Running             kube-controller-manager   0                   1b6db6ca6ffaa       kube-controller-manager-old-k8s-version-762177   kube-system
	
	
	==> coredns [6fde638d64a8538e5d1060db32cdf062283998597675375b1737492d9e419826] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55305 - 18946 "HINFO IN 102968855790872616.8207038389254125681. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.096998983s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-762177
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-762177
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=old-k8s-version-762177
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_26_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:26:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-762177
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:27:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:27:14 +0000   Sat, 27 Dec 2025 20:26:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:27:14 +0000   Sat, 27 Dec 2025 20:26:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:27:14 +0000   Sat, 27 Dec 2025 20:26:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:27:14 +0000   Sat, 27 Dec 2025 20:27:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-762177
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                81258586-7f74-4e22-8b3b-4eafa1fc89ef
	  Boot ID:                    e862e3ac-f8e7-431b-a536-9252fc29cb3f
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-5dd5756b68-lklgt                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-old-k8s-version-762177                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         38s
	  kube-system                 kindnet-89clv                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-old-k8s-version-762177             250m (3%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-old-k8s-version-762177    200m (2%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-99q8t                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-old-k8s-version-762177             100m (1%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 39s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s   kubelet          Node old-k8s-version-762177 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s   kubelet          Node old-k8s-version-762177 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s   kubelet          Node old-k8s-version-762177 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node old-k8s-version-762177 event: Registered Node old-k8s-version-762177 in Controller
	  Normal  NodeReady                12s   kubelet          Node old-k8s-version-762177 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.091803] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026212] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.286875] kauditd_printk_skb: 47 callbacks suppressed
	[Dec27 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e cb b0 88 25 d7 08 06
	[ +12.641300] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 46 2d 1c 8d 7d 08 06
	[  +0.000396] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e cb b0 88 25 d7 08 06
	[Dec27 20:26] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 11 8b 52 63 6c 08 06
	[ +22.750477] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 b6 be 1c 32 17 08 06
	[  +0.000388] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 11 8b 52 63 6c 08 06
	[ +11.650371] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae cd 49 2a 1d c0 08 06
	[Dec27 20:27] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 53 ca 84 73 c0 08 06
	[  +0.000337] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 74 f5 52 32 9a 08 06
	[ +17.275927] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 80 6a 51 25 19 08 06
	[  +0.000341] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ae cd 49 2a 1d c0 08 06
	
	
	==> etcd [5af58efcc82907e560a8394ada9b173ac4bca388750994c65d1902812de28e3c] <==
	{"level":"info","ts":"2025-12-27T20:26:39.157371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-12-27T20:26:39.157486Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-12-27T20:26:39.158812Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-27T20:26:39.158949Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-27T20:26:39.158997Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-27T20:26:39.159027Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T20:26:39.159096Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T20:26:40.045506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-27T20:26:40.045549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-27T20:26:40.045577Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2025-12-27T20:26:40.045595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2025-12-27T20:26:40.045601Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-27T20:26:40.045609Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-12-27T20:26:40.045616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-27T20:26:40.046266Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T20:26:40.046711Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:26:40.046711Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-762177 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:26:40.046749Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:26:40.046877Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T20:26:40.047042Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T20:26:40.047073Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T20:26:40.046897Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:26:40.047096Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T20:26:40.048167Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:26:40.048196Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 20:27:22 up  1:09,  0 user,  load average: 3.44, 3.10, 2.14
	Linux old-k8s-version-762177 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [558cc9a5b1c0d6a6e7adc119a29d55531a47dbceb4f558f8fb95fddd9766cfb6] <==
	I1227 20:26:59.485997       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:26:59.486354       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1227 20:26:59.486562       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:26:59.486593       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:26:59.486621       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:26:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:26:59.692603       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:26:59.693321       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:26:59.693444       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:26:59.693596       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 20:27:00.085840       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:27:00.086070       1 metrics.go:72] Registering metrics
	I1227 20:27:00.086288       1 controller.go:711] "Syncing nftables rules"
	I1227 20:27:09.700006       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1227 20:27:09.700061       1 main.go:301] handling current node
	I1227 20:27:19.696285       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1227 20:27:19.696325       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3d51fa6cac0fe14de74cfc7c5a38b33791decb278e520bad1d7cd93e7f088afd] <==
	I1227 20:26:41.215547       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 20:26:41.216012       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1227 20:26:41.216047       1 aggregator.go:166] initial CRD sync complete...
	I1227 20:26:41.216055       1 autoregister_controller.go:141] Starting autoregister controller
	I1227 20:26:41.216060       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 20:26:41.216067       1 cache.go:39] Caches are synced for autoregister controller
	I1227 20:26:41.216953       1 controller.go:624] quota admission added evaluator for: namespaces
	E1227 20:26:41.218584       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1227 20:26:41.422782       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:26:42.119066       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1227 20:26:42.122362       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1227 20:26:42.122381       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1227 20:26:42.467163       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:26:42.499879       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:26:42.624756       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 20:26:42.631522       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1227 20:26:42.632516       1 controller.go:624] quota admission added evaluator for: endpoints
	I1227 20:26:42.636476       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:26:43.172880       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1227 20:26:43.866331       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1227 20:26:43.876694       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 20:26:43.887201       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1227 20:26:56.080052       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1227 20:26:57.031895       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1227 20:26:57.031895       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [5acd83f5cf5b5907aec30dca279a6066912fb46f8568a247c948383e753696b8] <==
	I1227 20:26:56.188625       1 shared_informer.go:318] Caches are synced for attach detach
	I1227 20:26:56.204315       1 shared_informer.go:318] Caches are synced for persistent volume
	I1227 20:26:56.221780       1 shared_informer.go:318] Caches are synced for GC
	I1227 20:26:56.236254       1 shared_informer.go:318] Caches are synced for daemon sets
	I1227 20:26:56.560966       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 20:26:56.572070       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 20:26:56.572096       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1227 20:26:56.735170       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-fqcww"
	I1227 20:26:56.740259       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-lklgt"
	I1227 20:26:56.748338       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="665.075546ms"
	I1227 20:26:56.754991       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.606177ms"
	I1227 20:26:56.755080       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="47.616µs"
	I1227 20:26:56.757791       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.145µs"
	I1227 20:26:57.043500       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-89clv"
	I1227 20:26:57.049841       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-99q8t"
	I1227 20:26:57.396266       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1227 20:26:57.414808       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-fqcww"
	I1227 20:26:57.421056       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="25.489674ms"
	I1227 20:26:57.427172       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.057545ms"
	I1227 20:26:57.427275       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="65.167µs"
	I1227 20:27:10.246108       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="116.761µs"
	I1227 20:27:10.257221       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="117.407µs"
	I1227 20:27:11.056447       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.132972ms"
	I1227 20:27:11.056568       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.311µs"
	I1227 20:27:11.177756       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [99a9e412f59f32a963a982da59313ccef63ce460f82de1971d9e2e6033362630] <==
	I1227 20:26:57.487544       1 server_others.go:69] "Using iptables proxy"
	I1227 20:26:57.498118       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1227 20:26:57.518276       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:26:57.521354       1 server_others.go:152] "Using iptables Proxier"
	I1227 20:26:57.521395       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1227 20:26:57.521405       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1227 20:26:57.521435       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1227 20:26:57.521723       1 server.go:846] "Version info" version="v1.28.0"
	I1227 20:26:57.521739       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:26:57.522361       1 config.go:97] "Starting endpoint slice config controller"
	I1227 20:26:57.522390       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1227 20:26:57.522428       1 config.go:188] "Starting service config controller"
	I1227 20:26:57.522425       1 config.go:315] "Starting node config controller"
	I1227 20:26:57.522445       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1227 20:26:57.522433       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1227 20:26:57.622976       1 shared_informer.go:318] Caches are synced for node config
	I1227 20:26:57.623008       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1227 20:26:57.623017       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [9beb16b56a07e2e6dc085b9fbb0f60ed57ed45ea8dce72b0ee46a85d564c6e80] <==
	E1227 20:26:41.174653       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1227 20:26:41.174667       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1227 20:26:41.174711       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1227 20:26:41.174466       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1227 20:26:41.174737       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1227 20:26:41.174803       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1227 20:26:41.174826       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1227 20:26:41.174907       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1227 20:26:41.174769       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1227 20:26:41.175025       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1227 20:26:41.175268       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1227 20:26:41.175329       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1227 20:26:41.175270       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1227 20:26:41.175401       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1227 20:26:42.053279       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1227 20:26:42.053309       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1227 20:26:42.115013       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1227 20:26:42.115051       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1227 20:26:42.196371       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1227 20:26:42.196418       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1227 20:26:42.204541       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1227 20:26:42.204583       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1227 20:26:42.237969       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1227 20:26:42.238009       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1227 20:26:42.770483       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 27 20:26:56 old-k8s-version-762177 kubelet[1393]: I1227 20:26:56.218520    1393 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 27 20:26:56 old-k8s-version-762177 kubelet[1393]: I1227 20:26:56.219320    1393 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 27 20:26:57 old-k8s-version-762177 kubelet[1393]: I1227 20:26:57.063330    1393 topology_manager.go:215] "Topology Admit Handler" podUID="93c4fc0d-f1ce-41d7-9c7b-17ed626306c7" podNamespace="kube-system" podName="kindnet-89clv"
	Dec 27 20:26:57 old-k8s-version-762177 kubelet[1393]: I1227 20:26:57.071767    1393 topology_manager.go:215] "Topology Admit Handler" podUID="eadfafbe-2007-418a-a8bd-85fa704db6b6" podNamespace="kube-system" podName="kube-proxy-99q8t"
	Dec 27 20:26:57 old-k8s-version-762177 kubelet[1393]: I1227 20:26:57.134691    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv5fd\" (UniqueName: \"kubernetes.io/projected/93c4fc0d-f1ce-41d7-9c7b-17ed626306c7-kube-api-access-hv5fd\") pod \"kindnet-89clv\" (UID: \"93c4fc0d-f1ce-41d7-9c7b-17ed626306c7\") " pod="kube-system/kindnet-89clv"
	Dec 27 20:26:57 old-k8s-version-762177 kubelet[1393]: I1227 20:26:57.134938    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eadfafbe-2007-418a-a8bd-85fa704db6b6-lib-modules\") pod \"kube-proxy-99q8t\" (UID: \"eadfafbe-2007-418a-a8bd-85fa704db6b6\") " pod="kube-system/kube-proxy-99q8t"
	Dec 27 20:26:57 old-k8s-version-762177 kubelet[1393]: I1227 20:26:57.134984    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93c4fc0d-f1ce-41d7-9c7b-17ed626306c7-xtables-lock\") pod \"kindnet-89clv\" (UID: \"93c4fc0d-f1ce-41d7-9c7b-17ed626306c7\") " pod="kube-system/kindnet-89clv"
	Dec 27 20:26:57 old-k8s-version-762177 kubelet[1393]: I1227 20:26:57.135013    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93c4fc0d-f1ce-41d7-9c7b-17ed626306c7-lib-modules\") pod \"kindnet-89clv\" (UID: \"93c4fc0d-f1ce-41d7-9c7b-17ed626306c7\") " pod="kube-system/kindnet-89clv"
	Dec 27 20:26:57 old-k8s-version-762177 kubelet[1393]: I1227 20:26:57.135052    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eadfafbe-2007-418a-a8bd-85fa704db6b6-kube-proxy\") pod \"kube-proxy-99q8t\" (UID: \"eadfafbe-2007-418a-a8bd-85fa704db6b6\") " pod="kube-system/kube-proxy-99q8t"
	Dec 27 20:26:57 old-k8s-version-762177 kubelet[1393]: I1227 20:26:57.135088    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/93c4fc0d-f1ce-41d7-9c7b-17ed626306c7-cni-cfg\") pod \"kindnet-89clv\" (UID: \"93c4fc0d-f1ce-41d7-9c7b-17ed626306c7\") " pod="kube-system/kindnet-89clv"
	Dec 27 20:26:57 old-k8s-version-762177 kubelet[1393]: I1227 20:26:57.135120    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eadfafbe-2007-418a-a8bd-85fa704db6b6-xtables-lock\") pod \"kube-proxy-99q8t\" (UID: \"eadfafbe-2007-418a-a8bd-85fa704db6b6\") " pod="kube-system/kube-proxy-99q8t"
	Dec 27 20:26:57 old-k8s-version-762177 kubelet[1393]: I1227 20:26:57.135150    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pskbv\" (UniqueName: \"kubernetes.io/projected/eadfafbe-2007-418a-a8bd-85fa704db6b6-kube-api-access-pskbv\") pod \"kube-proxy-99q8t\" (UID: \"eadfafbe-2007-418a-a8bd-85fa704db6b6\") " pod="kube-system/kube-proxy-99q8t"
	Dec 27 20:26:58 old-k8s-version-762177 kubelet[1393]: I1227 20:26:58.007491    1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-99q8t" podStartSLOduration=1.007442203 podCreationTimestamp="2025-12-27 20:26:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:26:58.007444495 +0000 UTC m=+14.168703585" watchObservedRunningTime="2025-12-27 20:26:58.007442203 +0000 UTC m=+14.168701295"
	Dec 27 20:27:00 old-k8s-version-762177 kubelet[1393]: I1227 20:27:00.015903    1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-89clv" podStartSLOduration=1.165695824 podCreationTimestamp="2025-12-27 20:26:57 +0000 UTC" firstStartedPulling="2025-12-27 20:26:57.374474357 +0000 UTC m=+13.535733441" lastFinishedPulling="2025-12-27 20:26:59.224631865 +0000 UTC m=+15.385890947" observedRunningTime="2025-12-27 20:27:00.015656736 +0000 UTC m=+16.176915827" watchObservedRunningTime="2025-12-27 20:27:00.01585333 +0000 UTC m=+16.177112421"
	Dec 27 20:27:10 old-k8s-version-762177 kubelet[1393]: I1227 20:27:10.221443    1393 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 27 20:27:10 old-k8s-version-762177 kubelet[1393]: I1227 20:27:10.246485    1393 topology_manager.go:215] "Topology Admit Handler" podUID="022c6c7c-4655-42a5-8b6f-390cdb0e7623" podNamespace="kube-system" podName="coredns-5dd5756b68-lklgt"
	Dec 27 20:27:10 old-k8s-version-762177 kubelet[1393]: I1227 20:27:10.246769    1393 topology_manager.go:215] "Topology Admit Handler" podUID="58875037-9be0-4a9d-b2e6-7deb9514ad33" podNamespace="kube-system" podName="storage-provisioner"
	Dec 27 20:27:10 old-k8s-version-762177 kubelet[1393]: I1227 20:27:10.329877    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/58875037-9be0-4a9d-b2e6-7deb9514ad33-tmp\") pod \"storage-provisioner\" (UID: \"58875037-9be0-4a9d-b2e6-7deb9514ad33\") " pod="kube-system/storage-provisioner"
	Dec 27 20:27:10 old-k8s-version-762177 kubelet[1393]: I1227 20:27:10.329956    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/022c6c7c-4655-42a5-8b6f-390cdb0e7623-config-volume\") pod \"coredns-5dd5756b68-lklgt\" (UID: \"022c6c7c-4655-42a5-8b6f-390cdb0e7623\") " pod="kube-system/coredns-5dd5756b68-lklgt"
	Dec 27 20:27:10 old-k8s-version-762177 kubelet[1393]: I1227 20:27:10.330045    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7d4r\" (UniqueName: \"kubernetes.io/projected/58875037-9be0-4a9d-b2e6-7deb9514ad33-kube-api-access-f7d4r\") pod \"storage-provisioner\" (UID: \"58875037-9be0-4a9d-b2e6-7deb9514ad33\") " pod="kube-system/storage-provisioner"
	Dec 27 20:27:10 old-k8s-version-762177 kubelet[1393]: I1227 20:27:10.330171    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqxkg\" (UniqueName: \"kubernetes.io/projected/022c6c7c-4655-42a5-8b6f-390cdb0e7623-kube-api-access-gqxkg\") pod \"coredns-5dd5756b68-lklgt\" (UID: \"022c6c7c-4655-42a5-8b6f-390cdb0e7623\") " pod="kube-system/coredns-5dd5756b68-lklgt"
	Dec 27 20:27:11 old-k8s-version-762177 kubelet[1393]: I1227 20:27:11.037239    1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.037182492 podCreationTimestamp="2025-12-27 20:26:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:27:11.036766958 +0000 UTC m=+27.198026048" watchObservedRunningTime="2025-12-27 20:27:11.037182492 +0000 UTC m=+27.198441585"
	Dec 27 20:27:11 old-k8s-version-762177 kubelet[1393]: I1227 20:27:11.048810    1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-lklgt" podStartSLOduration=15.048758751 podCreationTimestamp="2025-12-27 20:26:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:27:11.048386814 +0000 UTC m=+27.209645905" watchObservedRunningTime="2025-12-27 20:27:11.048758751 +0000 UTC m=+27.210017836"
	Dec 27 20:27:13 old-k8s-version-762177 kubelet[1393]: I1227 20:27:13.204237    1393 topology_manager.go:215] "Topology Admit Handler" podUID="361ecc55-6296-4f19-ba72-adde33ca680f" podNamespace="default" podName="busybox"
	Dec 27 20:27:13 old-k8s-version-762177 kubelet[1393]: I1227 20:27:13.248502    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnw22\" (UniqueName: \"kubernetes.io/projected/361ecc55-6296-4f19-ba72-adde33ca680f-kube-api-access-pnw22\") pod \"busybox\" (UID: \"361ecc55-6296-4f19-ba72-adde33ca680f\") " pod="default/busybox"
	
	
	==> storage-provisioner [c3d11db5a058dbc67d025c8ed72e2f99e218acbe225795c8b03c371ecc21f2ab] <==
	I1227 20:27:10.618286       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 20:27:10.626467       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 20:27:10.626521       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1227 20:27:10.635063       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 20:27:10.635324       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-762177_765331c9-a2e5-473c-8cda-3a9977b2fab0!
	I1227 20:27:10.635329       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e92b6e7a-16bc-4c05-885c-17e1f4060299", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-762177_765331c9-a2e5-473c-8cda-3a9977b2fab0 became leader
	I1227 20:27:10.736565       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-762177_765331c9-a2e5-473c-8cda-3a9977b2fab0!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-762177 -n old-k8s-version-762177
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-762177 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.19s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-014435 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-014435 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (264.444317ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:27:39Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-014435 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
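(Editor's note) The exit-11 failure above comes from minikube's "is the cluster paused?" probe: before enabling an addon it shells into the node and runs `sudo runc list -f json`, and here that command fails outright because /run/runc does not exist on a crio node, so the whole enable is aborted with MK_ADDON_ENABLE_PAUSED. The snippet below is a minimal illustrative sketch of that kind of probe, not minikube's actual implementation; the function name listPaused and the struct runcContainer are hypothetical, only the `runc list -f json` invocation is taken from the log.

// Illustrative sketch only (NOT minikube's code): probe paused containers the way
// the failing check does, by shelling out to `runc list -f json`.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer holds the two fields of interest from `runc list -f json` output.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listPaused returns IDs of paused containers; it returns an error when runc itself
// fails, e.g. "open /run/runc: no such file or directory" as seen in the stderr above.
func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc list -f json: %w", err)
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		return nil, fmt.Errorf("decode runc output: %w", err)
	}
	var paused []string
	for _, c := range containers {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	paused, err := listPaused()
	if err != nil {
		// This is the branch the test hit: the probe fails before any paused check runs.
		fmt.Println("check paused failed:", err)
		return
	}
	fmt.Println("paused containers:", paused)
}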
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-014435 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-014435 describe deploy/metrics-server -n kube-system: exit status 1 (65.542536ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-014435 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-014435
helpers_test.go:244: (dbg) docker inspect no-preload-014435:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8d514d0c2855a01d46c86888a6d5e056ab094bb969dd5844893ce45192dbf091",
	        "Created": "2025-12-27T20:26:44.562734517Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 305856,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:26:44.594386136Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/8d514d0c2855a01d46c86888a6d5e056ab094bb969dd5844893ce45192dbf091/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d514d0c2855a01d46c86888a6d5e056ab094bb969dd5844893ce45192dbf091/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d514d0c2855a01d46c86888a6d5e056ab094bb969dd5844893ce45192dbf091/hosts",
	        "LogPath": "/var/lib/docker/containers/8d514d0c2855a01d46c86888a6d5e056ab094bb969dd5844893ce45192dbf091/8d514d0c2855a01d46c86888a6d5e056ab094bb969dd5844893ce45192dbf091-json.log",
	        "Name": "/no-preload-014435",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-014435:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-014435",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d514d0c2855a01d46c86888a6d5e056ab094bb969dd5844893ce45192dbf091",
	                "LowerDir": "/var/lib/docker/overlay2/114d17e71bca22dd5824b5f17afeb5d5341158842a7d6bb4e59223bea0882373-init/diff:/var/lib/docker/overlay2/37a0694d5e09176fe02add5b16d603e44d65e3a985a3465775c60819ea782502/diff",
	                "MergedDir": "/var/lib/docker/overlay2/114d17e71bca22dd5824b5f17afeb5d5341158842a7d6bb4e59223bea0882373/merged",
	                "UpperDir": "/var/lib/docker/overlay2/114d17e71bca22dd5824b5f17afeb5d5341158842a7d6bb4e59223bea0882373/diff",
	                "WorkDir": "/var/lib/docker/overlay2/114d17e71bca22dd5824b5f17afeb5d5341158842a7d6bb4e59223bea0882373/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-014435",
	                "Source": "/var/lib/docker/volumes/no-preload-014435/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-014435",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-014435",
	                "name.minikube.sigs.k8s.io": "no-preload-014435",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "cc929200601dbc0166fa67e58864ac9512e3de7fde2a88b0f0efd3a25ea7ae46",
	            "SandboxKey": "/var/run/docker/netns/cc929200601d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-014435": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "da47a33f1df0e45ac0871af30769ae1b8230bf0f77cd43d071316f15c5ec0145",
	                    "EndpointID": "7c3acd0106aec5e563bbcc052f17efc2f2b2846fdd7992431f888809acf46c2e",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "76:5d:bd:b2:e2:f8",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-014435",
	                        "8d514d0c2855"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-014435 -n no-preload-014435
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-014435 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-014435 logs -n 25: (1.199535665s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-436655 sudo systemctl cat kubelet --no-pager                                                                                  │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo journalctl -xeu kubelet --all --full --no-pager                                                                   │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo cat /etc/kubernetes/kubelet.conf                                                                                  │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo cat /var/lib/kubelet/config.yaml                                                                                  │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo systemctl status docker --all --full --no-pager                                                                   │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ ssh     │ -p bridge-436655 sudo systemctl cat docker --no-pager                                                                                   │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo cat /etc/docker/daemon.json                                                                                       │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ ssh     │ -p bridge-436655 sudo docker system info                                                                                                │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ ssh     │ -p bridge-436655 sudo systemctl status cri-docker --all --full --no-pager                                                               │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ ssh     │ -p bridge-436655 sudo systemctl cat cri-docker --no-pager                                                                               │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                          │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ ssh     │ -p bridge-436655 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                    │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo cri-dockerd --version                                                                                             │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo systemctl status containerd --all --full --no-pager                                                               │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ ssh     │ -p bridge-436655 sudo systemctl cat containerd --no-pager                                                                               │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo cat /lib/systemd/system/containerd.service                                                                        │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo cat /etc/containerd/config.toml                                                                                   │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo containerd config dump                                                                                            │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo systemctl status crio --all --full --no-pager                                                                     │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo systemctl cat crio --no-pager                                                                                     │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                           │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo crio config                                                                                                       │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ delete  │ -p bridge-436655                                                                                                                        │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ addons  │ enable metrics-server -p no-preload-014435 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ delete  │ -p disable-driver-mounts-541137                                                                                                         │ disable-driver-mounts-541137 │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:27:23
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:27:23.725006  316262 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:27:23.725106  316262 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:27:23.725118  316262 out.go:374] Setting ErrFile to fd 2...
	I1227 20:27:23.725125  316262 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:27:23.725302  316262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:27:23.725791  316262 out.go:368] Setting JSON to false
	I1227 20:27:23.726984  316262 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4193,"bootTime":1766863051,"procs":322,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 20:27:23.727042  316262 start.go:143] virtualization: kvm guest
	I1227 20:27:23.728923  316262 out.go:179] * [embed-certs-820583] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 20:27:23.730182  316262 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:27:23.730220  316262 notify.go:221] Checking for updates...
	I1227 20:27:23.732838  316262 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:27:23.733977  316262 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:27:23.735042  316262 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube
	I1227 20:27:23.736226  316262 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 20:27:23.737221  316262 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:27:23.738810  316262 config.go:182] Loaded profile config "bridge-436655": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:27:23.738998  316262 config.go:182] Loaded profile config "no-preload-014435": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:27:23.739196  316262 config.go:182] Loaded profile config "old-k8s-version-762177": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 20:27:23.739321  316262 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:27:23.764173  316262 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 20:27:23.764276  316262 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:27:23.842107  316262 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-27 20:27:23.830078808 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:27:23.842247  316262 docker.go:319] overlay module found
	I1227 20:27:23.843693  316262 out.go:179] * Using the docker driver based on user configuration
	I1227 20:27:23.844746  316262 start.go:309] selected driver: docker
	I1227 20:27:23.844764  316262 start.go:928] validating driver "docker" against <nil>
	I1227 20:27:23.844780  316262 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:27:23.845574  316262 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:27:23.902626  316262 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-27 20:27:23.893698064 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:27:23.902778  316262 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 20:27:23.902985  316262 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:27:23.904548  316262 out.go:179] * Using Docker driver with root privileges
	I1227 20:27:23.905626  316262 cni.go:84] Creating CNI manager for ""
	I1227 20:27:23.905686  316262 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:27:23.905696  316262 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 20:27:23.905751  316262 start.go:353] cluster config:
	{Name:embed-certs-820583 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-820583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:27:23.906874  316262 out.go:179] * Starting "embed-certs-820583" primary control-plane node in "embed-certs-820583" cluster
	I1227 20:27:23.907869  316262 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:27:23.908892  316262 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:27:23.909787  316262 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:27:23.909818  316262 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 20:27:23.909828  316262 cache.go:65] Caching tarball of preloaded images
	I1227 20:27:23.909891  316262 preload.go:251] Found /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 20:27:23.909907  316262 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:27:23.909951  316262 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:27:23.910024  316262 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/config.json ...
	I1227 20:27:23.910049  316262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/config.json: {Name:mk56a0b7c7e986b7d7f5260105eed2ec4cd8a6e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:27:23.929618  316262 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:27:23.929634  316262 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:27:23.929648  316262 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:27:23.929676  316262 start.go:360] acquireMachinesLock for embed-certs-820583: {Name:mk01eaa0328a4f3967965b40089a5a188a2ca888 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:27:23.929757  316262 start.go:364] duration metric: took 67.457µs to acquireMachinesLock for "embed-certs-820583"
	I1227 20:27:23.929778  316262 start.go:93] Provisioning new machine with config: &{Name:embed-certs-820583 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-820583 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:27:23.929840  316262 start.go:125] createHost starting for "" (driver="docker")
	W1227 20:27:23.807739  305296 node_ready.go:57] node "no-preload-014435" has "Ready":"False" status (will retry)
	W1227 20:27:26.304050  305296 node_ready.go:57] node "no-preload-014435" has "Ready":"False" status (will retry)
	I1227 20:27:26.802878  305296 node_ready.go:49] node "no-preload-014435" is "Ready"
	I1227 20:27:26.802904  305296 node_ready.go:38] duration metric: took 13.002603973s for node "no-preload-014435" to be "Ready" ...
	I1227 20:27:26.802940  305296 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:27:26.802979  305296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:27:26.814451  305296 api_server.go:72] duration metric: took 13.366859253s to wait for apiserver process to appear ...
	I1227 20:27:26.814481  305296 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:27:26.814505  305296 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1227 20:27:26.818514  305296 api_server.go:325] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1227 20:27:26.819496  305296 api_server.go:141] control plane version: v1.35.0
	I1227 20:27:26.819516  305296 api_server.go:131] duration metric: took 5.02933ms to wait for apiserver health ...
	I1227 20:27:26.819529  305296 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:27:26.822420  305296 system_pods.go:59] 8 kube-system pods found
	I1227 20:27:26.822455  305296 system_pods.go:61] "coredns-7d764666f9-nvrq6" [ca55daec-8a25-48b5-ace0-eeb5441b6174] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:27:26.822462  305296 system_pods.go:61] "etcd-no-preload-014435" [efdf1420-2f4c-4882-8706-04e34727a803] Running
	I1227 20:27:26.822475  305296 system_pods.go:61] "kindnet-7pgwz" [a1f9fadf-b5dd-472d-bffe-f8a555aa44c9] Running
	I1227 20:27:26.822479  305296 system_pods.go:61] "kube-apiserver-no-preload-014435" [6a20b877-1c1b-4f46-b4dc-32a6ed3a3b68] Running
	I1227 20:27:26.822486  305296 system_pods.go:61] "kube-controller-manager-no-preload-014435" [e9d451ba-0129-47a3-b703-df1bfd83631d] Running
	I1227 20:27:26.822490  305296 system_pods.go:61] "kube-proxy-ctvzq" [8db29263-ce40-4df9-9316-781104ff2dd5] Running
	I1227 20:27:26.822499  305296 system_pods.go:61] "kube-scheduler-no-preload-014435" [7a4391d1-1151-47d8-9f3e-3be45844ec85] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:27:26.822511  305296 system_pods.go:61] "storage-provisioner" [dcd68309-2ed4-4177-b826-fe8649b75bbd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:27:26.822516  305296 system_pods.go:74] duration metric: took 2.982117ms to wait for pod list to return data ...
	I1227 20:27:26.822528  305296 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:27:26.824348  305296 default_sa.go:45] found service account: "default"
	I1227 20:27:26.824363  305296 default_sa.go:55] duration metric: took 1.828386ms for default service account to be created ...
	I1227 20:27:26.824370  305296 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:27:26.827040  305296 system_pods.go:86] 8 kube-system pods found
	I1227 20:27:26.827067  305296 system_pods.go:89] "coredns-7d764666f9-nvrq6" [ca55daec-8a25-48b5-ace0-eeb5441b6174] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:27:26.827075  305296 system_pods.go:89] "etcd-no-preload-014435" [efdf1420-2f4c-4882-8706-04e34727a803] Running
	I1227 20:27:26.827083  305296 system_pods.go:89] "kindnet-7pgwz" [a1f9fadf-b5dd-472d-bffe-f8a555aa44c9] Running
	I1227 20:27:26.827090  305296 system_pods.go:89] "kube-apiserver-no-preload-014435" [6a20b877-1c1b-4f46-b4dc-32a6ed3a3b68] Running
	I1227 20:27:26.827100  305296 system_pods.go:89] "kube-controller-manager-no-preload-014435" [e9d451ba-0129-47a3-b703-df1bfd83631d] Running
	I1227 20:27:26.827109  305296 system_pods.go:89] "kube-proxy-ctvzq" [8db29263-ce40-4df9-9316-781104ff2dd5] Running
	I1227 20:27:26.827117  305296 system_pods.go:89] "kube-scheduler-no-preload-014435" [7a4391d1-1151-47d8-9f3e-3be45844ec85] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:27:26.827129  305296 system_pods.go:89] "storage-provisioner" [dcd68309-2ed4-4177-b826-fe8649b75bbd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:27:26.827156  305296 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1227 20:27:27.129709  305296 system_pods.go:86] 8 kube-system pods found
	I1227 20:27:27.129744  305296 system_pods.go:89] "coredns-7d764666f9-nvrq6" [ca55daec-8a25-48b5-ace0-eeb5441b6174] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:27:27.129754  305296 system_pods.go:89] "etcd-no-preload-014435" [efdf1420-2f4c-4882-8706-04e34727a803] Running
	I1227 20:27:27.129762  305296 system_pods.go:89] "kindnet-7pgwz" [a1f9fadf-b5dd-472d-bffe-f8a555aa44c9] Running
	I1227 20:27:27.129768  305296 system_pods.go:89] "kube-apiserver-no-preload-014435" [6a20b877-1c1b-4f46-b4dc-32a6ed3a3b68] Running
	I1227 20:27:27.129773  305296 system_pods.go:89] "kube-controller-manager-no-preload-014435" [e9d451ba-0129-47a3-b703-df1bfd83631d] Running
	I1227 20:27:27.129778  305296 system_pods.go:89] "kube-proxy-ctvzq" [8db29263-ce40-4df9-9316-781104ff2dd5] Running
	I1227 20:27:27.129785  305296 system_pods.go:89] "kube-scheduler-no-preload-014435" [7a4391d1-1151-47d8-9f3e-3be45844ec85] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:27:27.129796  305296 system_pods.go:89] "storage-provisioner" [dcd68309-2ed4-4177-b826-fe8649b75bbd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:27:27.490424  305296 system_pods.go:86] 8 kube-system pods found
	I1227 20:27:27.490463  305296 system_pods.go:89] "coredns-7d764666f9-nvrq6" [ca55daec-8a25-48b5-ace0-eeb5441b6174] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:27:27.490471  305296 system_pods.go:89] "etcd-no-preload-014435" [efdf1420-2f4c-4882-8706-04e34727a803] Running
	I1227 20:27:27.490481  305296 system_pods.go:89] "kindnet-7pgwz" [a1f9fadf-b5dd-472d-bffe-f8a555aa44c9] Running
	I1227 20:27:27.490487  305296 system_pods.go:89] "kube-apiserver-no-preload-014435" [6a20b877-1c1b-4f46-b4dc-32a6ed3a3b68] Running
	I1227 20:27:27.490493  305296 system_pods.go:89] "kube-controller-manager-no-preload-014435" [e9d451ba-0129-47a3-b703-df1bfd83631d] Running
	I1227 20:27:27.490498  305296 system_pods.go:89] "kube-proxy-ctvzq" [8db29263-ce40-4df9-9316-781104ff2dd5] Running
	I1227 20:27:27.490507  305296 system_pods.go:89] "kube-scheduler-no-preload-014435" [7a4391d1-1151-47d8-9f3e-3be45844ec85] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:27:27.490519  305296 system_pods.go:89] "storage-provisioner" [dcd68309-2ed4-4177-b826-fe8649b75bbd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:27:27.988804  305296 system_pods.go:86] 8 kube-system pods found
	I1227 20:27:27.988833  305296 system_pods.go:89] "coredns-7d764666f9-nvrq6" [ca55daec-8a25-48b5-ace0-eeb5441b6174] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:27:27.988839  305296 system_pods.go:89] "etcd-no-preload-014435" [efdf1420-2f4c-4882-8706-04e34727a803] Running
	I1227 20:27:27.988845  305296 system_pods.go:89] "kindnet-7pgwz" [a1f9fadf-b5dd-472d-bffe-f8a555aa44c9] Running
	I1227 20:27:27.988849  305296 system_pods.go:89] "kube-apiserver-no-preload-014435" [6a20b877-1c1b-4f46-b4dc-32a6ed3a3b68] Running
	I1227 20:27:27.988853  305296 system_pods.go:89] "kube-controller-manager-no-preload-014435" [e9d451ba-0129-47a3-b703-df1bfd83631d] Running
	I1227 20:27:27.988857  305296 system_pods.go:89] "kube-proxy-ctvzq" [8db29263-ce40-4df9-9316-781104ff2dd5] Running
	I1227 20:27:27.988863  305296 system_pods.go:89] "kube-scheduler-no-preload-014435" [7a4391d1-1151-47d8-9f3e-3be45844ec85] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:27:27.988868  305296 system_pods.go:89] "storage-provisioner" [dcd68309-2ed4-4177-b826-fe8649b75bbd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:27:28.495639  305296 system_pods.go:86] 8 kube-system pods found
	I1227 20:27:28.495678  305296 system_pods.go:89] "coredns-7d764666f9-nvrq6" [ca55daec-8a25-48b5-ace0-eeb5441b6174] Running
	I1227 20:27:28.495687  305296 system_pods.go:89] "etcd-no-preload-014435" [efdf1420-2f4c-4882-8706-04e34727a803] Running
	I1227 20:27:28.495692  305296 system_pods.go:89] "kindnet-7pgwz" [a1f9fadf-b5dd-472d-bffe-f8a555aa44c9] Running
	I1227 20:27:28.495699  305296 system_pods.go:89] "kube-apiserver-no-preload-014435" [6a20b877-1c1b-4f46-b4dc-32a6ed3a3b68] Running
	I1227 20:27:28.495710  305296 system_pods.go:89] "kube-controller-manager-no-preload-014435" [e9d451ba-0129-47a3-b703-df1bfd83631d] Running
	I1227 20:27:28.495714  305296 system_pods.go:89] "kube-proxy-ctvzq" [8db29263-ce40-4df9-9316-781104ff2dd5] Running
	I1227 20:27:28.495720  305296 system_pods.go:89] "kube-scheduler-no-preload-014435" [7a4391d1-1151-47d8-9f3e-3be45844ec85] Running
	I1227 20:27:28.495723  305296 system_pods.go:89] "storage-provisioner" [dcd68309-2ed4-4177-b826-fe8649b75bbd] Running
	I1227 20:27:28.495733  305296 system_pods.go:126] duration metric: took 1.671356875s to wait for k8s-apps to be running ...
	I1227 20:27:28.495746  305296 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:27:28.495793  305296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:27:28.512622  305296 system_svc.go:56] duration metric: took 16.8656ms WaitForService to wait for kubelet
	I1227 20:27:28.512660  305296 kubeadm.go:587] duration metric: took 15.065070856s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:27:28.512685  305296 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:27:28.516190  305296 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 20:27:28.516218  305296 node_conditions.go:123] node cpu capacity is 8
	I1227 20:27:28.516238  305296 node_conditions.go:105] duration metric: took 3.53761ms to run NodePressure ...
	I1227 20:27:28.516252  305296 start.go:242] waiting for startup goroutines ...
	I1227 20:27:28.516264  305296 start.go:247] waiting for cluster config update ...
	I1227 20:27:28.516274  305296 start.go:256] writing updated cluster config ...
	I1227 20:27:28.516487  305296 ssh_runner.go:195] Run: rm -f paused
	I1227 20:27:28.521121  305296 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:27:28.524761  305296 pod_ready.go:83] waiting for pod "coredns-7d764666f9-nvrq6" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:28.529184  305296 pod_ready.go:94] pod "coredns-7d764666f9-nvrq6" is "Ready"
	I1227 20:27:28.529209  305296 pod_ready.go:86] duration metric: took 4.425594ms for pod "coredns-7d764666f9-nvrq6" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:28.531224  305296 pod_ready.go:83] waiting for pod "etcd-no-preload-014435" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:28.535230  305296 pod_ready.go:94] pod "etcd-no-preload-014435" is "Ready"
	I1227 20:27:28.535259  305296 pod_ready.go:86] duration metric: took 4.013026ms for pod "etcd-no-preload-014435" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:28.537332  305296 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-014435" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:28.541132  305296 pod_ready.go:94] pod "kube-apiserver-no-preload-014435" is "Ready"
	I1227 20:27:28.541150  305296 pod_ready.go:86] duration metric: took 3.802293ms for pod "kube-apiserver-no-preload-014435" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:28.543067  305296 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-014435" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:23.931389  316262 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 20:27:23.931615  316262 start.go:159] libmachine.API.Create for "embed-certs-820583" (driver="docker")
	I1227 20:27:23.931649  316262 client.go:173] LocalClient.Create starting
	I1227 20:27:23.931716  316262 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem
	I1227 20:27:23.931756  316262 main.go:144] libmachine: Decoding PEM data...
	I1227 20:27:23.931784  316262 main.go:144] libmachine: Parsing certificate...
	I1227 20:27:23.931861  316262 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem
	I1227 20:27:23.931892  316262 main.go:144] libmachine: Decoding PEM data...
	I1227 20:27:23.931909  316262 main.go:144] libmachine: Parsing certificate...
	I1227 20:27:23.932258  316262 cli_runner.go:164] Run: docker network inspect embed-certs-820583 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 20:27:23.950281  316262 cli_runner.go:211] docker network inspect embed-certs-820583 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 20:27:23.950357  316262 network_create.go:284] running [docker network inspect embed-certs-820583] to gather additional debugging logs...
	I1227 20:27:23.950379  316262 cli_runner.go:164] Run: docker network inspect embed-certs-820583
	W1227 20:27:23.968446  316262 cli_runner.go:211] docker network inspect embed-certs-820583 returned with exit code 1
	I1227 20:27:23.968469  316262 network_create.go:287] error running [docker network inspect embed-certs-820583]: docker network inspect embed-certs-820583: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-820583 not found
	I1227 20:27:23.968482  316262 network_create.go:289] output of [docker network inspect embed-certs-820583]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-820583 not found
	
	** /stderr **
	I1227 20:27:23.968575  316262 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:27:23.985430  316262 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0db0ba8938bc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:24:76:f1:9a:26} reservation:<nil>}
	I1227 20:27:23.986134  316262 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-11f8d597a005 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:16:b4:6c:7e:ff:91} reservation:<nil>}
	I1227 20:27:23.987011  316262 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f7cf3350a110 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:14:0b:19:b4:4d} reservation:<nil>}
	I1227 20:27:23.987815  316262 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f24610}
	I1227 20:27:23.987847  316262 network_create.go:124] attempt to create docker network embed-certs-820583 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 20:27:23.987883  316262 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-820583 embed-certs-820583
	I1227 20:27:24.034697  316262 network_create.go:108] docker network embed-certs-820583 192.168.76.0/24 created
	I1227 20:27:24.034732  316262 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-820583" container
	I1227 20:27:24.034805  316262 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 20:27:24.051958  316262 cli_runner.go:164] Run: docker volume create embed-certs-820583 --label name.minikube.sigs.k8s.io=embed-certs-820583 --label created_by.minikube.sigs.k8s.io=true
	I1227 20:27:24.068931  316262 oci.go:103] Successfully created a docker volume embed-certs-820583
	I1227 20:27:24.069010  316262 cli_runner.go:164] Run: docker run --rm --name embed-certs-820583-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-820583 --entrypoint /usr/bin/test -v embed-certs-820583:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 20:27:24.473148  316262 oci.go:107] Successfully prepared a docker volume embed-certs-820583
	I1227 20:27:24.473224  316262 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:27:24.473247  316262 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 20:27:24.473305  316262 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-820583:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 20:27:28.374978  316262 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-820583:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.901628138s)
	I1227 20:27:28.375013  316262 kic.go:203] duration metric: took 3.901762336s to extract preloaded images to volume ...
	W1227 20:27:28.375109  316262 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1227 20:27:28.375150  316262 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1227 20:27:28.375213  316262 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 20:27:28.454497  316262 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-820583 --name embed-certs-820583 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-820583 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-820583 --network embed-certs-820583 --ip 192.168.76.2 --volume embed-certs-820583:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 20:27:28.925418  305296 pod_ready.go:94] pod "kube-controller-manager-no-preload-014435" is "Ready"
	I1227 20:27:28.925451  305296 pod_ready.go:86] duration metric: took 382.365428ms for pod "kube-controller-manager-no-preload-014435" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:29.128073  305296 pod_ready.go:83] waiting for pod "kube-proxy-ctvzq" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:29.525891  305296 pod_ready.go:94] pod "kube-proxy-ctvzq" is "Ready"
	I1227 20:27:29.525975  305296 pod_ready.go:86] duration metric: took 397.873289ms for pod "kube-proxy-ctvzq" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:29.725458  305296 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-014435" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:30.126062  305296 pod_ready.go:94] pod "kube-scheduler-no-preload-014435" is "Ready"
	I1227 20:27:30.126093  305296 pod_ready.go:86] duration metric: took 400.607143ms for pod "kube-scheduler-no-preload-014435" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:30.126108  305296 pod_ready.go:40] duration metric: took 1.604945712s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:27:30.176787  305296 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 20:27:30.178394  305296 out.go:179] * Done! kubectl is now configured to use "no-preload-014435" cluster and "default" namespace by default
	I1227 20:27:28.757393  316262 cli_runner.go:164] Run: docker container inspect embed-certs-820583 --format={{.State.Running}}
	I1227 20:27:28.777391  316262 cli_runner.go:164] Run: docker container inspect embed-certs-820583 --format={{.State.Status}}
	I1227 20:27:28.797232  316262 cli_runner.go:164] Run: docker exec embed-certs-820583 stat /var/lib/dpkg/alternatives/iptables
	I1227 20:27:28.849769  316262 oci.go:144] the created container "embed-certs-820583" has a running status.
	I1227 20:27:28.849796  316262 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22332-10897/.minikube/machines/embed-certs-820583/id_rsa...
	I1227 20:27:28.945408  316262 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22332-10897/.minikube/machines/embed-certs-820583/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 20:27:28.973515  316262 cli_runner.go:164] Run: docker container inspect embed-certs-820583 --format={{.State.Status}}
	I1227 20:27:28.995153  316262 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 20:27:28.995179  316262 kic_runner.go:114] Args: [docker exec --privileged embed-certs-820583 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 20:27:29.046107  316262 cli_runner.go:164] Run: docker container inspect embed-certs-820583 --format={{.State.Status}}
	I1227 20:27:29.077284  316262 machine.go:94] provisionDockerMachine start ...
	I1227 20:27:29.077573  316262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:27:29.111459  316262 main.go:144] libmachine: Using SSH client type: native
	I1227 20:27:29.112251  316262 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1227 20:27:29.112405  316262 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:27:29.252796  316262 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-820583
	
	I1227 20:27:29.252829  316262 ubuntu.go:182] provisioning hostname "embed-certs-820583"
	I1227 20:27:29.252890  316262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:27:29.273070  316262 main.go:144] libmachine: Using SSH client type: native
	I1227 20:27:29.273394  316262 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1227 20:27:29.273417  316262 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-820583 && echo "embed-certs-820583" | sudo tee /etc/hostname
	I1227 20:27:29.409563  316262 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-820583
	
	I1227 20:27:29.409638  316262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:27:29.430084  316262 main.go:144] libmachine: Using SSH client type: native
	I1227 20:27:29.433090  316262 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1227 20:27:29.433125  316262 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-820583' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-820583/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-820583' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:27:29.563457  316262 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:27:29.563492  316262 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-10897/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-10897/.minikube}
	I1227 20:27:29.563549  316262 ubuntu.go:190] setting up certificates
	I1227 20:27:29.563574  316262 provision.go:84] configureAuth start
	I1227 20:27:29.563638  316262 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-820583
	I1227 20:27:29.582048  316262 provision.go:143] copyHostCerts
	I1227 20:27:29.582112  316262 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem, removing ...
	I1227 20:27:29.582127  316262 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem
	I1227 20:27:29.582196  316262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem (1078 bytes)
	I1227 20:27:29.582338  316262 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem, removing ...
	I1227 20:27:29.582350  316262 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem
	I1227 20:27:29.582390  316262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem (1123 bytes)
	I1227 20:27:29.582473  316262 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem, removing ...
	I1227 20:27:29.582483  316262 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem
	I1227 20:27:29.582517  316262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem (1675 bytes)
	I1227 20:27:29.582585  316262 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem org=jenkins.embed-certs-820583 san=[127.0.0.1 192.168.76.2 embed-certs-820583 localhost minikube]
	I1227 20:27:29.722727  316262 provision.go:177] copyRemoteCerts
	I1227 20:27:29.722782  316262 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:27:29.722837  316262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:27:29.742558  316262 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/embed-certs-820583/id_rsa Username:docker}
	I1227 20:27:29.837549  316262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:27:29.858034  316262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 20:27:29.875905  316262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 20:27:29.893071  316262 provision.go:87] duration metric: took 329.47298ms to configureAuth
	I1227 20:27:29.893097  316262 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:27:29.893258  316262 config.go:182] Loaded profile config "embed-certs-820583": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:27:29.893371  316262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:27:29.911563  316262 main.go:144] libmachine: Using SSH client type: native
	I1227 20:27:29.911762  316262 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1227 20:27:29.911778  316262 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:27:30.193042  316262 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:27:30.193068  316262 machine.go:97] duration metric: took 1.115750803s to provisionDockerMachine
	I1227 20:27:30.193080  316262 client.go:176] duration metric: took 6.261423616s to LocalClient.Create
	I1227 20:27:30.193096  316262 start.go:167] duration metric: took 6.261480688s to libmachine.API.Create "embed-certs-820583"
	I1227 20:27:30.193104  316262 start.go:293] postStartSetup for "embed-certs-820583" (driver="docker")
	I1227 20:27:30.193117  316262 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:27:30.193180  316262 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:27:30.193227  316262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:27:30.215702  316262 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/embed-certs-820583/id_rsa Username:docker}
	I1227 20:27:30.309676  316262 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:27:30.313649  316262 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:27:30.313682  316262 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:27:30.313695  316262 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/addons for local assets ...
	I1227 20:27:30.313760  316262 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/files for local assets ...
	I1227 20:27:30.313869  316262 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem -> 144272.pem in /etc/ssl/certs
	I1227 20:27:30.314023  316262 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:27:30.322565  316262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:27:30.346597  316262 start.go:296] duration metric: took 153.478192ms for postStartSetup
	I1227 20:27:30.346984  316262 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-820583
	I1227 20:27:30.368267  316262 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/config.json ...
	I1227 20:27:30.368575  316262 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:27:30.368623  316262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:27:30.386326  316262 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/embed-certs-820583/id_rsa Username:docker}
	I1227 20:27:30.476687  316262 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:27:30.482037  316262 start.go:128] duration metric: took 6.552184051s to createHost
	I1227 20:27:30.482062  316262 start.go:83] releasing machines lock for "embed-certs-820583", held for 6.552293933s
	I1227 20:27:30.482151  316262 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-820583
	I1227 20:27:30.501183  316262 ssh_runner.go:195] Run: cat /version.json
	I1227 20:27:30.501241  316262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:27:30.501261  316262 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:27:30.501352  316262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:27:30.522610  316262 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/embed-certs-820583/id_rsa Username:docker}
	I1227 20:27:30.523057  316262 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/embed-certs-820583/id_rsa Username:docker}
	I1227 20:27:30.613746  316262 ssh_runner.go:195] Run: systemctl --version
	I1227 20:27:30.684147  316262 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:27:30.717393  316262 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:27:30.721763  316262 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:27:30.721842  316262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:27:30.745711  316262 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1227 20:27:30.745734  316262 start.go:496] detecting cgroup driver to use...
	I1227 20:27:30.745766  316262 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 20:27:30.745800  316262 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:27:30.761996  316262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:27:30.774620  316262 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:27:30.774660  316262 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:27:30.790718  316262 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:27:30.808243  316262 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:27:30.897032  316262 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:27:30.994431  316262 docker.go:234] disabling docker service ...
	I1227 20:27:30.994484  316262 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:27:31.012652  316262 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:27:31.024399  316262 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:27:31.109944  316262 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:27:31.217948  316262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:27:31.234366  316262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:27:31.251679  316262 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:27:31.251731  316262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:27:31.267976  316262 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 20:27:31.268030  316262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:27:31.278512  316262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:27:31.288382  316262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:27:31.298345  316262 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:27:31.306789  316262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:27:31.317839  316262 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:27:31.331618  316262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:27:31.341247  316262 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:27:31.349772  316262 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:27:31.357344  316262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:27:31.447175  316262 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:27:31.596481  316262 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:27:31.596547  316262 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:27:31.601067  316262 start.go:574] Will wait 60s for crictl version
	I1227 20:27:31.601125  316262 ssh_runner.go:195] Run: which crictl
	I1227 20:27:31.604892  316262 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:27:31.631661  316262 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:27:31.631740  316262 ssh_runner.go:195] Run: crio --version
	I1227 20:27:31.659629  316262 ssh_runner.go:195] Run: crio --version
	I1227 20:27:31.689601  316262 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:27:31.690662  316262 cli_runner.go:164] Run: docker network inspect embed-certs-820583 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:27:31.709059  316262 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 20:27:31.713093  316262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:27:31.723384  316262 kubeadm.go:884] updating cluster {Name:embed-certs-820583 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-820583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:27:31.723508  316262 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:27:31.723573  316262 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:27:31.755645  316262 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:27:31.755664  316262 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:27:31.755705  316262 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:27:31.783187  316262 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:27:31.783214  316262 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:27:31.783228  316262 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 20:27:31.783334  316262 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-820583 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-820583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:27:31.783415  316262 ssh_runner.go:195] Run: crio config
	I1227 20:27:31.832660  316262 cni.go:84] Creating CNI manager for ""
	I1227 20:27:31.832685  316262 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:27:31.832703  316262 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:27:31.832733  316262 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-820583 NodeName:embed-certs-820583 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:27:31.832891  316262 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-820583"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:27:31.833026  316262 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:27:31.841928  316262 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:27:31.841995  316262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:27:31.850358  316262 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1227 20:27:31.864307  316262 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:27:31.883613  316262 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1227 20:27:31.897599  316262 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:27:31.901330  316262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:27:31.911430  316262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:27:31.992359  316262 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:27:32.020850  316262 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583 for IP: 192.168.76.2
	I1227 20:27:32.020873  316262 certs.go:195] generating shared ca certs ...
	I1227 20:27:32.020894  316262 certs.go:227] acquiring lock for ca certs: {Name:mkf159d13acb73e6d0ac046e4aaef1dce56a185d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:27:32.021068  316262 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key
	I1227 20:27:32.021137  316262 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key
	I1227 20:27:32.021152  316262 certs.go:257] generating profile certs ...
	I1227 20:27:32.021238  316262 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/client.key
	I1227 20:27:32.021265  316262 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/client.crt with IP's: []
	I1227 20:27:32.131153  316262 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/client.crt ...
	I1227 20:27:32.131186  316262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/client.crt: {Name:mkf78d95d491171ae52e34a485fe4c86cac4665a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:27:32.131392  316262 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/client.key ...
	I1227 20:27:32.131409  316262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/client.key: {Name:mk0988c4d7dc08c795c01726890106555d7aab3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:27:32.131550  316262 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/apiserver.key.da959220
	I1227 20:27:32.131576  316262 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/apiserver.crt.da959220 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1227 20:27:32.248368  316262 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/apiserver.crt.da959220 ...
	I1227 20:27:32.248402  316262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/apiserver.crt.da959220: {Name:mk38fd99f26fdcb14b906f00c796af7aa4dceabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:27:32.248639  316262 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/apiserver.key.da959220 ...
	I1227 20:27:32.248661  316262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/apiserver.key.da959220: {Name:mkaa899ed2f260802436aeb05ba39e6dded8e7fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:27:32.248759  316262 certs.go:382] copying /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/apiserver.crt.da959220 -> /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/apiserver.crt
	I1227 20:27:32.248827  316262 certs.go:386] copying /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/apiserver.key.da959220 -> /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/apiserver.key
	I1227 20:27:32.248883  316262 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/proxy-client.key
	I1227 20:27:32.248897  316262 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/proxy-client.crt with IP's: []
	I1227 20:27:32.313639  316262 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/proxy-client.crt ...
	I1227 20:27:32.313664  316262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/proxy-client.crt: {Name:mkb2c8d607a1c05efd082e8a82830cd724783477 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:27:32.313809  316262 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/proxy-client.key ...
	I1227 20:27:32.313822  316262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/proxy-client.key: {Name:mk04b8ddf8e36268501111e2f0e61924c20ecec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:27:32.314007  316262 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem (1338 bytes)
	W1227 20:27:32.314043  316262 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427_empty.pem, impossibly tiny 0 bytes
	I1227 20:27:32.314054  316262 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 20:27:32.314081  316262 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:27:32.314105  316262 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:27:32.314129  316262 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem (1675 bytes)
	I1227 20:27:32.314169  316262 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:27:32.314773  316262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:27:32.332929  316262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:27:32.351582  316262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:27:32.370944  316262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 20:27:32.388193  316262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1227 20:27:32.406249  316262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 20:27:32.424108  316262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:27:32.441191  316262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:27:32.458225  316262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /usr/share/ca-certificates/144272.pem (1708 bytes)
	I1227 20:27:32.477477  316262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:27:32.498163  316262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem --> /usr/share/ca-certificates/14427.pem (1338 bytes)
	I1227 20:27:32.516680  316262 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:27:32.530373  316262 ssh_runner.go:195] Run: openssl version
	I1227 20:27:32.536577  316262 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:27:32.544137  316262 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:27:32.551558  316262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:27:32.555567  316262 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:27:32.555614  316262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:27:32.591043  316262 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:27:32.598209  316262 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 20:27:32.605569  316262 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14427.pem
	I1227 20:27:32.613623  316262 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14427.pem /etc/ssl/certs/14427.pem
	I1227 20:27:32.621046  316262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14427.pem
	I1227 20:27:32.624699  316262 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 19:58 /usr/share/ca-certificates/14427.pem
	I1227 20:27:32.624741  316262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14427.pem
	I1227 20:27:32.661342  316262 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:27:32.668607  316262 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14427.pem /etc/ssl/certs/51391683.0
	I1227 20:27:32.676061  316262 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/144272.pem
	I1227 20:27:32.683220  316262 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/144272.pem /etc/ssl/certs/144272.pem
	I1227 20:27:32.690508  316262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144272.pem
	I1227 20:27:32.693989  316262 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 19:58 /usr/share/ca-certificates/144272.pem
	I1227 20:27:32.694040  316262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144272.pem
	I1227 20:27:32.732329  316262 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:27:32.740008  316262 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/144272.pem /etc/ssl/certs/3ec20f2e.0
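	A minimal sketch of the trust-store step logged above, assuming OpenSSL's standard subject-hash link convention (the hash value is whatever `-hash` prints, e.g. b5213941 as seen in the log):
	
	  # derive the /etc/ssl/certs/<hash>.0 name from the CA's subject hash
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  # point the hash-named link at the installed CA so TLS clients can find it
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0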
	I1227 20:27:32.747563  316262 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:27:32.751423  316262 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 20:27:32.751473  316262 kubeadm.go:401] StartCluster: {Name:embed-certs-820583 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-820583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:27:32.751550  316262 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:27:32.751596  316262 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:27:32.783955  316262 cri.go:96] found id: ""
	I1227 20:27:32.784024  316262 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:27:32.793476  316262 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 20:27:32.801876  316262 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 20:27:32.801944  316262 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 20:27:32.810335  316262 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 20:27:32.810365  316262 kubeadm.go:158] found existing configuration files:
	
	I1227 20:27:32.810407  316262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 20:27:32.817981  316262 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 20:27:32.818042  316262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 20:27:32.825289  316262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 20:27:32.832660  316262 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 20:27:32.832706  316262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 20:27:32.840474  316262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 20:27:32.848593  316262 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 20:27:32.848631  316262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 20:27:32.856276  316262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 20:27:32.864333  316262 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 20:27:32.864397  316262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 20:27:32.872075  316262 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 20:27:32.910929  316262 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 20:27:32.911011  316262 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 20:27:32.980268  316262 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 20:27:32.980352  316262 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1227 20:27:32.980429  316262 kubeadm.go:319] OS: Linux
	I1227 20:27:32.980527  316262 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 20:27:32.980597  316262 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 20:27:32.980697  316262 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 20:27:32.980798  316262 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 20:27:32.980878  316262 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 20:27:32.980971  316262 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 20:27:32.981051  316262 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 20:27:32.981688  316262 kubeadm.go:319] CGROUPS_IO: enabled
	I1227 20:27:33.044954  316262 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 20:27:33.045169  316262 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 20:27:33.045326  316262 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 20:27:33.055995  316262 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 20:27:33.058496  316262 out.go:252]   - Generating certificates and keys ...
	I1227 20:27:33.058631  316262 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 20:27:33.058762  316262 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 20:27:33.141814  316262 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 20:27:33.176766  316262 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 20:27:33.207476  316262 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 20:27:33.286996  316262 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 20:27:33.408006  316262 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 20:27:33.408210  316262 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-820583 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 20:27:33.604422  316262 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 20:27:33.604612  316262 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-820583 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 20:27:33.651872  316262 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 20:27:33.827747  316262 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 20:27:33.880338  316262 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 20:27:33.880434  316262 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 20:27:33.923736  316262 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 20:27:34.040154  316262 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 20:27:34.200416  316262 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 20:27:34.322337  316262 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 20:27:34.503826  316262 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 20:27:34.505034  316262 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 20:27:34.509230  316262 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 20:27:34.511427  316262 out.go:252]   - Booting up control plane ...
	I1227 20:27:34.511575  316262 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 20:27:34.511704  316262 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 20:27:34.511822  316262 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 20:27:34.525679  316262 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 20:27:34.525820  316262 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 20:27:34.532850  316262 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 20:27:34.533288  316262 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 20:27:34.533365  316262 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 20:27:34.630441  316262 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 20:27:34.630587  316262 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 20:27:35.131797  316262 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.455276ms
	I1227 20:27:35.134694  316262 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 20:27:35.134841  316262 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1227 20:27:35.135016  316262 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 20:27:35.135137  316262 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 20:27:35.640661  316262 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 505.727799ms
	I1227 20:27:36.594667  316262 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.459831085s
	I1227 20:27:38.136609  316262 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.001802576s
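	The control-plane-check lines above poll fixed health endpoints; a sketch of hitting the same endpoints by hand from inside the node (addresses and ports taken from the log, -k skips certificate verification for brevity):
	
	  curl -k https://192.168.76.2:8443/livez     # kube-apiserver
	  curl -k https://127.0.0.1:10257/healthz     # kube-controller-manager
	  curl -k https://127.0.0.1:10259/livez       # kube-scheduler
	  curl http://127.0.0.1:10248/healthz         # kubelet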
	I1227 20:27:38.152939  316262 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 20:27:38.161583  316262 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 20:27:38.170981  316262 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 20:27:38.171258  316262 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-820583 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 20:27:38.178143  316262 kubeadm.go:319] [bootstrap-token] Using token: u98ku9.bwax1edcf8yh6znt
	I1227 20:27:38.179274  316262 out.go:252]   - Configuring RBAC rules ...
	I1227 20:27:38.179427  316262 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 20:27:38.182045  316262 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 20:27:38.186328  316262 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 20:27:38.189248  316262 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 20:27:38.191444  316262 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 20:27:38.193528  316262 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 20:27:38.543085  316262 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 20:27:38.956002  316262 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 20:27:39.543089  316262 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 20:27:39.544233  316262 kubeadm.go:319] 
	I1227 20:27:39.544355  316262 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 20:27:39.544376  316262 kubeadm.go:319] 
	I1227 20:27:39.544471  316262 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 20:27:39.544482  316262 kubeadm.go:319] 
	I1227 20:27:39.544510  316262 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 20:27:39.544589  316262 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 20:27:39.544673  316262 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 20:27:39.544685  316262 kubeadm.go:319] 
	I1227 20:27:39.544759  316262 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 20:27:39.544768  316262 kubeadm.go:319] 
	I1227 20:27:39.544833  316262 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 20:27:39.544842  316262 kubeadm.go:319] 
	I1227 20:27:39.544928  316262 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 20:27:39.545029  316262 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 20:27:39.545113  316262 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 20:27:39.545121  316262 kubeadm.go:319] 
	I1227 20:27:39.545220  316262 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 20:27:39.545321  316262 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 20:27:39.545333  316262 kubeadm.go:319] 
	I1227 20:27:39.545459  316262 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token u98ku9.bwax1edcf8yh6znt \
	I1227 20:27:39.545636  316262 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da6685172cfe638aa995cfd5ba180acce81fcf1770a93ae27b4f215e6d45ef35 \
	I1227 20:27:39.545668  316262 kubeadm.go:319] 	--control-plane 
	I1227 20:27:39.545678  316262 kubeadm.go:319] 
	I1227 20:27:39.545806  316262 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 20:27:39.545816  316262 kubeadm.go:319] 
	I1227 20:27:39.545960  316262 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token u98ku9.bwax1edcf8yh6znt \
	I1227 20:27:39.546117  316262 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da6685172cfe638aa995cfd5ba180acce81fcf1770a93ae27b4f215e6d45ef35 
	I1227 20:27:39.548770  316262 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1227 20:27:39.548955  316262 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 20:27:39.548980  316262 cni.go:84] Creating CNI manager for ""
	I1227 20:27:39.548993  316262 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:27:39.552740  316262 out.go:179] * Configuring CNI (Container Networking Interface) ...
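	As logged above, the "docker" driver + "crio" runtime combination leads minikube to recommend kindnet for CNI; a quick sketch of confirming the wiring afterwards (the pod label and config path are the usual kindnet/CNI defaults, so treat them as assumptions):
	
	  kubectl -n kube-system get pods -l app=kindnet -o wide   # kindnet daemonset pods
	  ls /etc/cni/net.d/                                       # CNI config dropped on the node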
	
	
	==> CRI-O <==
	Dec 27 20:27:27 no-preload-014435 crio[775]: time="2025-12-27T20:27:27.320255986Z" level=info msg="Started container" PID=2792 containerID=d7b08b8b5aea53d3320d483e0dbdb92522741c28ca289c85e86f3e8ac85a9a32 description=kube-system/storage-provisioner/storage-provisioner id=bdeab337-2161-4daf-b29f-6a59a19f2854 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d05e69ea985bb8fbd60e3da918b72591fa8ff77c03e567e4173313fb15342690
	Dec 27 20:27:27 no-preload-014435 crio[775]: time="2025-12-27T20:27:27.320423988Z" level=info msg="Started container" PID=2795 containerID=b4c8ede316284fdf62cdd5fd56a857bec9a67a7a2657ca00d4abb924aba83ac1 description=kube-system/coredns-7d764666f9-nvrq6/coredns id=e6880891-56dd-4a1e-b654-388b44769d16 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8fce3f84834c181e31fb90152eee78648dacd1d1130122b3f58784d7446ee7ff
	Dec 27 20:27:30 no-preload-014435 crio[775]: time="2025-12-27T20:27:30.637847996Z" level=info msg="Running pod sandbox: default/busybox/POD" id=1be04aad-3187-4e19-9cf1-9966e0fbcce2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:27:30 no-preload-014435 crio[775]: time="2025-12-27T20:27:30.637961411Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:27:30 no-preload-014435 crio[775]: time="2025-12-27T20:27:30.643716236Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4f25966d677c359f1ed64eb6d62227bc4fb96cad5e7676edbf11aea7deee1f03 UID:777b0dc8-69fb-44e6-85ed-eb73c72cfc69 NetNS:/var/run/netns/57db925b-059f-4fdb-b3a4-356030c50823 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000c54c10}] Aliases:map[]}"
	Dec 27 20:27:30 no-preload-014435 crio[775]: time="2025-12-27T20:27:30.64374994Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 27 20:27:30 no-preload-014435 crio[775]: time="2025-12-27T20:27:30.654483459Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4f25966d677c359f1ed64eb6d62227bc4fb96cad5e7676edbf11aea7deee1f03 UID:777b0dc8-69fb-44e6-85ed-eb73c72cfc69 NetNS:/var/run/netns/57db925b-059f-4fdb-b3a4-356030c50823 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000c54c10}] Aliases:map[]}"
	Dec 27 20:27:30 no-preload-014435 crio[775]: time="2025-12-27T20:27:30.654663677Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 27 20:27:30 no-preload-014435 crio[775]: time="2025-12-27T20:27:30.655727568Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 27 20:27:30 no-preload-014435 crio[775]: time="2025-12-27T20:27:30.657011091Z" level=info msg="Ran pod sandbox 4f25966d677c359f1ed64eb6d62227bc4fb96cad5e7676edbf11aea7deee1f03 with infra container: default/busybox/POD" id=1be04aad-3187-4e19-9cf1-9966e0fbcce2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:27:30 no-preload-014435 crio[775]: time="2025-12-27T20:27:30.658349236Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=db862a45-e967-416d-a571-f091182ad7af name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:27:30 no-preload-014435 crio[775]: time="2025-12-27T20:27:30.658495657Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=db862a45-e967-416d-a571-f091182ad7af name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:27:30 no-preload-014435 crio[775]: time="2025-12-27T20:27:30.658548588Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=db862a45-e967-416d-a571-f091182ad7af name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:27:30 no-preload-014435 crio[775]: time="2025-12-27T20:27:30.659368531Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=43f5f9d2-57d4-4ab7-b8e5-66fe95ea0c6d name=/runtime.v1.ImageService/PullImage
	Dec 27 20:27:30 no-preload-014435 crio[775]: time="2025-12-27T20:27:30.661703128Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 27 20:27:31 no-preload-014435 crio[775]: time="2025-12-27T20:27:31.306179602Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=43f5f9d2-57d4-4ab7-b8e5-66fe95ea0c6d name=/runtime.v1.ImageService/PullImage
	Dec 27 20:27:31 no-preload-014435 crio[775]: time="2025-12-27T20:27:31.306796119Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=518ef4f2-5610-47ef-b135-8e08c2763058 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:27:31 no-preload-014435 crio[775]: time="2025-12-27T20:27:31.308466669Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=731beb22-198a-4b8b-8209-6cd4d112f9c5 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:27:31 no-preload-014435 crio[775]: time="2025-12-27T20:27:31.311646809Z" level=info msg="Creating container: default/busybox/busybox" id=5a299bde-799c-488b-a83d-dd82d7b53833 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:27:31 no-preload-014435 crio[775]: time="2025-12-27T20:27:31.311824457Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:27:31 no-preload-014435 crio[775]: time="2025-12-27T20:27:31.315339073Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:27:31 no-preload-014435 crio[775]: time="2025-12-27T20:27:31.315776273Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:27:31 no-preload-014435 crio[775]: time="2025-12-27T20:27:31.34511791Z" level=info msg="Created container 67316164db520666b25ddf18e450222d7a1c5d7fb88ed0cc43f2a12cd5e4eab3: default/busybox/busybox" id=5a299bde-799c-488b-a83d-dd82d7b53833 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:27:31 no-preload-014435 crio[775]: time="2025-12-27T20:27:31.345595438Z" level=info msg="Starting container: 67316164db520666b25ddf18e450222d7a1c5d7fb88ed0cc43f2a12cd5e4eab3" id=26f2cddc-cc6f-4225-8112-dbf7cf0a2b3f name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:27:31 no-preload-014435 crio[775]: time="2025-12-27T20:27:31.347590483Z" level=info msg="Started container" PID=2870 containerID=67316164db520666b25ddf18e450222d7a1c5d7fb88ed0cc43f2a12cd5e4eab3 description=default/busybox/busybox id=26f2cddc-cc6f-4225-8112-dbf7cf0a2b3f name=/runtime.v1.RuntimeService/StartContainer sandboxID=4f25966d677c359f1ed64eb6d62227bc4fb96cad5e7676edbf11aea7deee1f03
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	67316164db520       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 seconds ago       Running             busybox                   0                   4f25966d677c3       busybox                                     default
	b4c8ede316284       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      13 seconds ago      Running             coredns                   0                   8fce3f84834c1       coredns-7d764666f9-nvrq6                    kube-system
	d7b08b8b5aea5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   d05e69ea985bb       storage-provisioner                         kube-system
	406d8cbe57f05       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    25 seconds ago      Running             kindnet-cni               0                   86d9ce8eb8f8c       kindnet-7pgwz                               kube-system
	4879a6c7e5887       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                      27 seconds ago      Running             kube-proxy                0                   07d40317c90cb       kube-proxy-ctvzq                            kube-system
	ba106cc3eb17b       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                      36 seconds ago      Running             kube-apiserver            0                   e1f4c27a01c79       kube-apiserver-no-preload-014435            kube-system
	ac88a4bec745b       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                      36 seconds ago      Running             kube-controller-manager   0                   99d52189a127b       kube-controller-manager-no-preload-014435   kube-system
	504deecad20d7       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      36 seconds ago      Running             etcd                      0                   a13e55f50a6cb       etcd-no-preload-014435                      kube-system
	d955e84925c73       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                      36 seconds ago      Running             kube-scheduler            0                   f22eb28bd963c       kube-scheduler-no-preload-014435            kube-system
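	The table above is the node's CRI-level view of the workloads; a sketch of reproducing it directly on the node with crictl (container IDs abbreviated exactly as in the table):
	
	  sudo crictl ps -a                       # all containers, as listed above
	  sudo crictl pods                        # the pod sandboxes backing them
	  sudo crictl inspect 67316164db520       # e.g. the busybox container's runtime spec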
	
	
	==> coredns [b4c8ede316284fdf62cdd5fd56a857bec9a67a7a2657ca00d4abb924aba83ac1] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:50810 - 61239 "HINFO IN 2548779351061264302.5722880830934606363. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.06620242s
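	A sketch of exercising this CoreDNS instance from inside the cluster (kube-dns is the kubeadm default service name; the busybox test image is just an example):
	
	  kubectl -n kube-system get svc kube-dns      # ClusterIP 10.96.0.10 per the apiserver log below
	  kubectl run dnstest --rm -it --image=busybox:1.28 --restart=Never \
	    -- nslookup kubernetes.default.svc.cluster.local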
	
	
	==> describe nodes <==
	Name:               no-preload-014435
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-014435
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=no-preload-014435
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_27_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:27:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-014435
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:27:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:27:39 +0000   Sat, 27 Dec 2025 20:27:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:27:39 +0000   Sat, 27 Dec 2025 20:27:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:27:39 +0000   Sat, 27 Dec 2025 20:27:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:27:39 +0000   Sat, 27 Dec 2025 20:27:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-014435
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                16adf691-8e3a-4b05-b69e-6cb195641c2f
	  Boot ID:                    e862e3ac-f8e7-431b-a536-9252fc29cb3f
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-7d764666f9-nvrq6                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-no-preload-014435                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-7pgwz                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-no-preload-014435             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-no-preload-014435    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-ctvzq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-no-preload-014435             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  29s   node-controller  Node no-preload-014435 event: Registered Node no-preload-014435 in Controller
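	A sketch of pulling the same node view ad hoc (names taken from the output above):
	
	  kubectl describe node no-preload-014435               # full report as rendered above
	  kubectl get node no-preload-014435 \
	    -o jsonpath='{.status.allocatable}'                 # just the allocatable resources
	  kubectl top node no-preload-014435                    # live usage, requires metrics-server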
	
	
	==> dmesg <==
	[  +0.091803] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026212] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.286875] kauditd_printk_skb: 47 callbacks suppressed
	[Dec27 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e cb b0 88 25 d7 08 06
	[ +12.641300] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 46 2d 1c 8d 7d 08 06
	[  +0.000396] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e cb b0 88 25 d7 08 06
	[Dec27 20:26] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 11 8b 52 63 6c 08 06
	[ +22.750477] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 b6 be 1c 32 17 08 06
	[  +0.000388] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 11 8b 52 63 6c 08 06
	[ +11.650371] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae cd 49 2a 1d c0 08 06
	[Dec27 20:27] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 53 ca 84 73 c0 08 06
	[  +0.000337] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 74 f5 52 32 9a 08 06
	[ +17.275927] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 80 6a 51 25 19 08 06
	[  +0.000341] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ae cd 49 2a 1d c0 08 06
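	The repeated "martian source" entries come from reverse-path-filter logging on the pod-CIDR traffic and are commonly benign noise in this bridged container setup; a sketch of inspecting or silencing them on the host (standard Linux sysctls):
	
	  sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians
	  sudo sysctl -w net.ipv4.conf.all.log_martians=0    # stop logging them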
	
	
	==> etcd [504deecad20d7f78a33d72e3fbc6c3477c23162054abb2a3ffe0be55b4d7e289] <==
	{"level":"info","ts":"2025-12-27T20:27:04.661329Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-12-27T20:27:04.661677Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:27:06.785647Z","caller":"traceutil/trace.go:172","msg":"trace[493823113] transaction","detail":"{read_only:false; response_revision:111; number_of_response:1; }","duration":"129.028633ms","start":"2025-12-27T20:27:06.656594Z","end":"2025-12-27T20:27:06.785623Z","steps":["trace[493823113] 'process raft request'  (duration: 30.225884ms)","trace[493823113] 'compare'  (duration: 98.599446ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-27T20:27:06.961824Z","caller":"traceutil/trace.go:172","msg":"trace[1768732536] transaction","detail":"{read_only:false; response_revision:113; number_of_response:1; }","duration":"108.69173ms","start":"2025-12-27T20:27:06.853111Z","end":"2025-12-27T20:27:06.961803Z","steps":["trace[1768732536] 'process raft request'  (duration: 36.027727ms)","trace[1768732536] 'compare'  (duration: 72.53809ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-27T20:27:27.127766Z","caller":"traceutil/trace.go:172","msg":"trace[509830621] transaction","detail":"{read_only:false; response_revision:410; number_of_response:1; }","duration":"116.188854ms","start":"2025-12-27T20:27:27.011555Z","end":"2025-12-27T20:27:27.127744Z","steps":["trace[509830621] 'process raft request'  (duration: 116.045121ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:27:27.452283Z","caller":"traceutil/trace.go:172","msg":"trace[1591746804] linearizableReadLoop","detail":"{readStateIndex:427; appliedIndex:427; }","duration":"105.931899ms","start":"2025-12-27T20:27:27.346323Z","end":"2025-12-27T20:27:27.452255Z","steps":["trace[1591746804] 'read index received'  (duration: 105.922281ms)","trace[1591746804] 'applied index is now lower than readState.Index'  (duration: 8.38µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-27T20:27:27.452420Z","caller":"traceutil/trace.go:172","msg":"trace[909014598] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"131.992112ms","start":"2025-12-27T20:27:27.320411Z","end":"2025-12-27T20:27:27.452403Z","steps":["trace[909014598] 'process raft request'  (duration: 131.872617ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-27T20:27:27.452454Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.114495ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:729"}
	{"level":"info","ts":"2025-12-27T20:27:27.452514Z","caller":"traceutil/trace.go:172","msg":"trace[948070045] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:411; }","duration":"106.19002ms","start":"2025-12-27T20:27:27.346313Z","end":"2025-12-27T20:27:27.452503Z","steps":["trace[948070045] 'agreement among raft nodes before linearized reading'  (duration: 106.018873ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:27:27.617701Z","caller":"traceutil/trace.go:172","msg":"trace[389226585] linearizableReadLoop","detail":"{readStateIndex:430; appliedIndex:430; }","duration":"121.544448ms","start":"2025-12-27T20:27:27.496127Z","end":"2025-12-27T20:27:27.617672Z","steps":["trace[389226585] 'read index received'  (duration: 121.533112ms)","trace[389226585] 'applied index is now lower than readState.Index'  (duration: 9.794µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-27T20:27:27.640539Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"144.383007ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2025-12-27T20:27:27.640614Z","caller":"traceutil/trace.go:172","msg":"trace[879145196] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:414; }","duration":"144.478519ms","start":"2025-12-27T20:27:27.496118Z","end":"2025-12-27T20:27:27.640596Z","steps":["trace[879145196] 'agreement among raft nodes before linearized reading'  (duration: 121.621139ms)","trace[879145196] 'range keys from in-memory index tree'  (duration: 22.712592ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-27T20:27:27.640719Z","caller":"traceutil/trace.go:172","msg":"trace[1438339724] transaction","detail":"{read_only:false; response_revision:416; number_of_response:1; }","duration":"143.25686ms","start":"2025-12-27T20:27:27.497451Z","end":"2025-12-27T20:27:27.640708Z","steps":["trace[1438339724] 'process raft request'  (duration: 143.220626ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:27:27.640723Z","caller":"traceutil/trace.go:172","msg":"trace[930934498] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"149.989252ms","start":"2025-12-27T20:27:27.490713Z","end":"2025-12-27T20:27:27.640703Z","steps":["trace[930934498] 'process raft request'  (duration: 127.004682ms)","trace[930934498] 'compare'  (duration: 22.766977ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-27T20:27:27.766367Z","caller":"traceutil/trace.go:172","msg":"trace[1582934708] linearizableReadLoop","detail":"{readStateIndex:433; appliedIndex:433; }","duration":"121.699327ms","start":"2025-12-27T20:27:27.644643Z","end":"2025-12-27T20:27:27.766343Z","steps":["trace[1582934708] 'read index received'  (duration: 121.688067ms)","trace[1582934708] 'applied index is now lower than readState.Index'  (duration: 9.228µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-27T20:27:27.778240Z","caller":"traceutil/trace.go:172","msg":"trace[1834253047] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"136.08152ms","start":"2025-12-27T20:27:27.642144Z","end":"2025-12-27T20:27:27.778225Z","steps":["trace[1834253047] 'process raft request'  (duration: 124.239187ms)","trace[1834253047] 'compare'  (duration: 11.754931ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-27T20:27:27.778347Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.675256ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2025-12-27T20:27:27.778395Z","caller":"traceutil/trace.go:172","msg":"trace[669775840] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:416; }","duration":"133.747359ms","start":"2025-12-27T20:27:27.644635Z","end":"2025-12-27T20:27:27.778382Z","steps":["trace[669775840] 'agreement among raft nodes before linearized reading'  (duration: 121.760225ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-27T20:27:27.808396Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.49004ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-12-27T20:27:27.808422Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.564182ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-27T20:27:27.808455Z","caller":"traceutil/trace.go:172","msg":"trace[951542366] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:417; }","duration":"110.562883ms","start":"2025-12-27T20:27:27.697880Z","end":"2025-12-27T20:27:27.808443Z","steps":["trace[951542366] 'agreement among raft nodes before linearized reading'  (duration: 110.450191ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:27:27.808469Z","caller":"traceutil/trace.go:172","msg":"trace[1355817706] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:417; }","duration":"126.62215ms","start":"2025-12-27T20:27:27.681836Z","end":"2025-12-27T20:27:27.808458Z","steps":["trace[1355817706] 'agreement among raft nodes before linearized reading'  (duration: 126.528319ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:27:27.808536Z","caller":"traceutil/trace.go:172","msg":"trace[1192588966] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"161.540883ms","start":"2025-12-27T20:27:27.646978Z","end":"2025-12-27T20:27:27.808519Z","steps":["trace[1192588966] 'process raft request'  (duration: 161.458344ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:27:27.808564Z","caller":"traceutil/trace.go:172","msg":"trace[25038834] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"164.456205ms","start":"2025-12-27T20:27:27.644090Z","end":"2025-12-27T20:27:27.808546Z","steps":["trace[25038834] 'process raft request'  (duration: 164.247982ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:27:27.986524Z","caller":"traceutil/trace.go:172","msg":"trace[404189088] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"169.535954ms","start":"2025-12-27T20:27:27.816960Z","end":"2025-12-27T20:27:27.986496Z","steps":["trace[404189088] 'process raft request'  (duration: 162.94128ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:27:41 up  1:10,  0 user,  load average: 3.25, 3.08, 2.15
	Linux no-preload-014435 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [406d8cbe57f058c2dab873107b3e1c6e59efa3df3be37a205eac519e87173284] <==
	I1227 20:27:15.708271       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:27:15.708574       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1227 20:27:15.708768       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:27:15.708795       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:27:15.708822       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:27:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:27:15.942057       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:27:15.942222       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:27:15.942256       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:27:15.942421       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 20:27:16.342380       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:27:16.342412       1 metrics.go:72] Registering metrics
	I1227 20:27:16.342476       1 controller.go:711] "Syncing nftables rules"
	I1227 20:27:25.944021       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1227 20:27:25.944101       1 main.go:301] handling current node
	I1227 20:27:35.945013       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1227 20:27:35.945067       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ba106cc3eb17b6d99cb8c36a44c9a8c5fbf5be925131c84eab57f077b494fdda] <==
	I1227 20:27:05.583775       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 20:27:05.583783       1 cache.go:39] Caches are synced for autoregister controller
	I1227 20:27:05.583707       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 20:27:05.584759       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 20:27:05.587132       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 20:27:05.592446       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:27:05.786038       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:27:06.487694       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1227 20:27:06.492660       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1227 20:27:06.492678       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:27:07.365081       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:27:07.397066       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:27:07.491825       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 20:27:07.497576       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1227 20:27:07.498583       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 20:27:07.502577       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:27:07.507643       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:27:08.429528       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 20:27:08.437852       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 20:27:08.444302       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1227 20:27:13.015547       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:27:13.212901       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1227 20:27:13.413567       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:27:13.419310       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1227 20:27:39.424394       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:37596: use of closed network connection
	
	
	==> kube-controller-manager [ac88a4bec745ba264e71cf808357c3a9e39ae5a2286eb6eeabbc0fe6d5b6331f] <==
	I1227 20:27:12.333433       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:12.333516       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:12.336762       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:12.336799       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:12.336807       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:12.336943       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:12.337082       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:12.337230       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:12.338731       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-014435" podCIDRs=["10.244.0.0/24"]
	I1227 20:27:12.339800       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:12.339981       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 20:27:12.340075       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-014435"
	I1227 20:27:12.340120       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1227 20:27:12.341198       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:12.341261       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:12.341366       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:12.341386       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:12.341504       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:12.341540       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:12.342971       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:12.432141       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:12.440237       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:12.440259       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:27:12.440266       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:27:27.341439       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [4879a6c7e5887e3760b3d7bcb9458b31a516e600a1bc5c46806c057fd8c629a8] <==
	I1227 20:27:13.742777       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:27:13.808047       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:27:13.908864       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:13.908964       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1227 20:27:13.909106       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:27:13.930360       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:27:13.930441       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:27:13.935741       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:27:13.936210       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:27:13.936231       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:27:13.937778       1 config.go:309] "Starting node config controller"
	I1227 20:27:13.937839       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:27:13.937880       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:27:13.937955       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:27:13.937980       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:27:13.938002       1 config.go:200] "Starting service config controller"
	I1227 20:27:13.938007       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:27:13.938013       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:27:13.938027       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:27:14.038184       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 20:27:14.038355       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:27:14.038374       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
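	kube-proxy reports the iptables proxier above, so service routing is materialized in the nat table; a sketch of spot-checking the generated rules on the node (chain names are the standard kube-proxy ones):
	
	  sudo iptables -t nat -L KUBE-SERVICES -n | head    # one entry per ClusterIP service
	  sudo iptables -t nat -L KUBE-NODEPORTS -n          # NodePort captures, if any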
	
	
	==> kube-scheduler [d955e84925c730d2db1b3bbf403510a956fc45509c5e870e5f06b648d6858946] <==
	E1227 20:27:05.553753       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 20:27:05.553774       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 20:27:05.553791       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 20:27:05.553840       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 20:27:05.553839       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 20:27:06.366787       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1227 20:27:06.399744       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 20:27:06.477180       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 20:27:06.495227       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 20:27:06.703674       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 20:27:06.703677       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 20:27:06.733836       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 20:27:06.790392       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 20:27:06.805533       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 20:27:06.881214       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 20:27:06.907112       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 20:27:06.925977       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 20:27:06.946889       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 20:27:06.970818       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 20:27:06.988518       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 20:27:07.023409       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 20:27:07.035502       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 20:27:07.097031       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 20:27:07.142046       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	I1227 20:27:08.846442       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:27:13 no-preload-014435 kubelet[2206]: I1227 20:27:13.280097    2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8db29263-ce40-4df9-9316-781104ff2dd5-xtables-lock\") pod \"kube-proxy-ctvzq\" (UID: \"8db29263-ce40-4df9-9316-781104ff2dd5\") " pod="kube-system/kube-proxy-ctvzq"
	Dec 27 20:27:13 no-preload-014435 kubelet[2206]: I1227 20:27:13.280125    2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8db29263-ce40-4df9-9316-781104ff2dd5-lib-modules\") pod \"kube-proxy-ctvzq\" (UID: \"8db29263-ce40-4df9-9316-781104ff2dd5\") " pod="kube-system/kube-proxy-ctvzq"
	Dec 27 20:27:13 no-preload-014435 kubelet[2206]: I1227 20:27:13.280163    2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a1f9fadf-b5dd-472d-bffe-f8a555aa44c9-cni-cfg\") pod \"kindnet-7pgwz\" (UID: \"a1f9fadf-b5dd-472d-bffe-f8a555aa44c9\") " pod="kube-system/kindnet-7pgwz"
	Dec 27 20:27:13 no-preload-014435 kubelet[2206]: I1227 20:27:13.280187    2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwjp8\" (UniqueName: \"kubernetes.io/projected/a1f9fadf-b5dd-472d-bffe-f8a555aa44c9-kube-api-access-fwjp8\") pod \"kindnet-7pgwz\" (UID: \"a1f9fadf-b5dd-472d-bffe-f8a555aa44c9\") " pod="kube-system/kindnet-7pgwz"
	Dec 27 20:27:13 no-preload-014435 kubelet[2206]: I1227 20:27:13.280209    2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kzms\" (UniqueName: \"kubernetes.io/projected/8db29263-ce40-4df9-9316-781104ff2dd5-kube-api-access-4kzms\") pod \"kube-proxy-ctvzq\" (UID: \"8db29263-ce40-4df9-9316-781104ff2dd5\") " pod="kube-system/kube-proxy-ctvzq"
	Dec 27 20:27:13 no-preload-014435 kubelet[2206]: I1227 20:27:13.280230    2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1f9fadf-b5dd-472d-bffe-f8a555aa44c9-xtables-lock\") pod \"kindnet-7pgwz\" (UID: \"a1f9fadf-b5dd-472d-bffe-f8a555aa44c9\") " pod="kube-system/kindnet-7pgwz"
	Dec 27 20:27:13 no-preload-014435 kubelet[2206]: I1227 20:27:13.280250    2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1f9fadf-b5dd-472d-bffe-f8a555aa44c9-lib-modules\") pod \"kindnet-7pgwz\" (UID: \"a1f9fadf-b5dd-472d-bffe-f8a555aa44c9\") " pod="kube-system/kindnet-7pgwz"
	Dec 27 20:27:14 no-preload-014435 kubelet[2206]: I1227 20:27:14.314254    2206 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-ctvzq" podStartSLOduration=1.314237552 podStartE2EDuration="1.314237552s" podCreationTimestamp="2025-12-27 20:27:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:27:14.314110074 +0000 UTC m=+6.134816423" watchObservedRunningTime="2025-12-27 20:27:14.314237552 +0000 UTC m=+6.134943895"
	Dec 27 20:27:17 no-preload-014435 kubelet[2206]: E1227 20:27:17.538475    2206 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-014435" containerName="kube-scheduler"
	Dec 27 20:27:17 no-preload-014435 kubelet[2206]: I1227 20:27:17.548952    2206 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-7pgwz" podStartSLOduration=2.649274843 podStartE2EDuration="4.548907534s" podCreationTimestamp="2025-12-27 20:27:13 +0000 UTC" firstStartedPulling="2025-12-27 20:27:13.567402307 +0000 UTC m=+5.388108635" lastFinishedPulling="2025-12-27 20:27:15.467034991 +0000 UTC m=+7.287741326" observedRunningTime="2025-12-27 20:27:16.319737666 +0000 UTC m=+8.140444031" watchObservedRunningTime="2025-12-27 20:27:17.548907534 +0000 UTC m=+9.369613877"
	Dec 27 20:27:19 no-preload-014435 kubelet[2206]: E1227 20:27:19.465595    2206 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-014435" containerName="kube-controller-manager"
	Dec 27 20:27:22 no-preload-014435 kubelet[2206]: E1227 20:27:22.858583    2206 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-014435" containerName="kube-apiserver"
	Dec 27 20:27:23 no-preload-014435 kubelet[2206]: E1227 20:27:23.259862    2206 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-014435" containerName="etcd"
	Dec 27 20:27:26 no-preload-014435 kubelet[2206]: I1227 20:27:26.401839    2206 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 27 20:27:26 no-preload-014435 kubelet[2206]: I1227 20:27:26.473607    2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca55daec-8a25-48b5-ace0-eeb5441b6174-config-volume\") pod \"coredns-7d764666f9-nvrq6\" (UID: \"ca55daec-8a25-48b5-ace0-eeb5441b6174\") " pod="kube-system/coredns-7d764666f9-nvrq6"
	Dec 27 20:27:26 no-preload-014435 kubelet[2206]: I1227 20:27:26.473789    2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4t6k\" (UniqueName: \"kubernetes.io/projected/dcd68309-2ed4-4177-b826-fe8649b75bbd-kube-api-access-j4t6k\") pod \"storage-provisioner\" (UID: \"dcd68309-2ed4-4177-b826-fe8649b75bbd\") " pod="kube-system/storage-provisioner"
	Dec 27 20:27:26 no-preload-014435 kubelet[2206]: I1227 20:27:26.474123    2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr57j\" (UniqueName: \"kubernetes.io/projected/ca55daec-8a25-48b5-ace0-eeb5441b6174-kube-api-access-tr57j\") pod \"coredns-7d764666f9-nvrq6\" (UID: \"ca55daec-8a25-48b5-ace0-eeb5441b6174\") " pod="kube-system/coredns-7d764666f9-nvrq6"
	Dec 27 20:27:26 no-preload-014435 kubelet[2206]: I1227 20:27:26.474282    2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/dcd68309-2ed4-4177-b826-fe8649b75bbd-tmp\") pod \"storage-provisioner\" (UID: \"dcd68309-2ed4-4177-b826-fe8649b75bbd\") " pod="kube-system/storage-provisioner"
	Dec 27 20:27:27 no-preload-014435 kubelet[2206]: E1227 20:27:27.543957    2206 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-014435" containerName="kube-scheduler"
	Dec 27 20:27:28 no-preload-014435 kubelet[2206]: E1227 20:27:28.338186    2206 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-nvrq6" containerName="coredns"
	Dec 27 20:27:28 no-preload-014435 kubelet[2206]: I1227 20:27:28.365581    2206 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-nvrq6" podStartSLOduration=15.36556196 podStartE2EDuration="15.36556196s" podCreationTimestamp="2025-12-27 20:27:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:27:28.353871646 +0000 UTC m=+20.174577989" watchObservedRunningTime="2025-12-27 20:27:28.36556196 +0000 UTC m=+20.186268303"
	Dec 27 20:27:28 no-preload-014435 kubelet[2206]: I1227 20:27:28.378814    2206 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.378794825 podStartE2EDuration="15.378794825s" podCreationTimestamp="2025-12-27 20:27:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:27:28.365460669 +0000 UTC m=+20.186167012" watchObservedRunningTime="2025-12-27 20:27:28.378794825 +0000 UTC m=+20.199501169"
	Dec 27 20:27:29 no-preload-014435 kubelet[2206]: E1227 20:27:29.340529    2206 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-nvrq6" containerName="coredns"
	Dec 27 20:27:30 no-preload-014435 kubelet[2206]: E1227 20:27:30.343121    2206 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-nvrq6" containerName="coredns"
	Dec 27 20:27:30 no-preload-014435 kubelet[2206]: I1227 20:27:30.402257    2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk5f9\" (UniqueName: \"kubernetes.io/projected/777b0dc8-69fb-44e6-85ed-eb73c72cfc69-kube-api-access-lk5f9\") pod \"busybox\" (UID: \"777b0dc8-69fb-44e6-85ed-eb73c72cfc69\") " pod="default/busybox"
	
	
	==> storage-provisioner [d7b08b8b5aea53d3320d483e0dbdb92522741c28ca289c85e86f3e8ac85a9a32] <==
	I1227 20:27:27.335561       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 20:27:27.344636       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 20:27:27.344694       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 20:27:27.453646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:27:27.494715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:27:27.494866       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 20:27:27.495059       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-014435_b9080af9-c188-475a-9a33-fed507a060c5!
	I1227 20:27:27.495025       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9f57cc67-39ae-4412-b3d6-f5e4088a0ea3", APIVersion:"v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-014435_b9080af9-c188-475a-9a33-fed507a060c5 became leader
	I1227 20:27:27.595613       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-014435_b9080af9-c188-475a-9a33-fed507a060c5!
	W1227 20:27:27.642092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:27:27.779577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:27:29.783373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:27:29.788147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:27:31.791400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:27:31.795694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:27:33.798487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:27:33.802383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:27:35.805428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:27:35.811226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:27:37.815030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:27:37.819947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:27:39.825433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:27:39.836903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-014435 -n no-preload-014435
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-014435 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.40s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-820583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-820583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (302.468596ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:28:08Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-820583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-820583 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-820583 describe deploy/metrics-server -n kube-system: exit status 1 (73.080408ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-820583 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
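The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's paused-container check, which shells into the node and runs `sudo runc list -f json`; the command exits non-zero because, per the stderr, `/run/runc` does not exist inside the node. A minimal sketch of rerunning that check by hand, assuming the embed-certs-820583 profile is still running (these ssh invocations are illustrative and were not part of this test run):

	# Re-run the same runc query the addon paused check performs, inside the node (hypothetical manual check)
	out/minikube-linux-amd64 -p embed-certs-820583 ssh "sudo runc list -f json"
	# Check whether the runc state directory named in the error message exists at all
	out/minikube-linux-amd64 -p embed-certs-820583 ssh "ls -ld /run/runc"

If the first command prints the same "open /run/runc: no such file or directory" error, the paused check will keep reporting the same exit status 11 failure seen above.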
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-820583
helpers_test.go:244: (dbg) docker inspect embed-certs-820583:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fc43585f1b095a24e4d5281b7099be3121c858459b1fabcb0807e1a2619a177e",
	        "Created": "2025-12-27T20:27:28.471289119Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 317858,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:27:28.505548185Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/fc43585f1b095a24e4d5281b7099be3121c858459b1fabcb0807e1a2619a177e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fc43585f1b095a24e4d5281b7099be3121c858459b1fabcb0807e1a2619a177e/hostname",
	        "HostsPath": "/var/lib/docker/containers/fc43585f1b095a24e4d5281b7099be3121c858459b1fabcb0807e1a2619a177e/hosts",
	        "LogPath": "/var/lib/docker/containers/fc43585f1b095a24e4d5281b7099be3121c858459b1fabcb0807e1a2619a177e/fc43585f1b095a24e4d5281b7099be3121c858459b1fabcb0807e1a2619a177e-json.log",
	        "Name": "/embed-certs-820583",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-820583:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-820583",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fc43585f1b095a24e4d5281b7099be3121c858459b1fabcb0807e1a2619a177e",
	                "LowerDir": "/var/lib/docker/overlay2/424ba02ad85ff8524d34e330df411d8326c522c5d62b5ecf2250fad536699b47-init/diff:/var/lib/docker/overlay2/37a0694d5e09176fe02add5b16d603e44d65e3a985a3465775c60819ea782502/diff",
	                "MergedDir": "/var/lib/docker/overlay2/424ba02ad85ff8524d34e330df411d8326c522c5d62b5ecf2250fad536699b47/merged",
	                "UpperDir": "/var/lib/docker/overlay2/424ba02ad85ff8524d34e330df411d8326c522c5d62b5ecf2250fad536699b47/diff",
	                "WorkDir": "/var/lib/docker/overlay2/424ba02ad85ff8524d34e330df411d8326c522c5d62b5ecf2250fad536699b47/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-820583",
	                "Source": "/var/lib/docker/volumes/embed-certs-820583/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-820583",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-820583",
	                "name.minikube.sigs.k8s.io": "embed-certs-820583",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6b7a66dd568ba2141dbe9f6a187b78529c4154a683e1f23193a054545d49288b",
	            "SandboxKey": "/var/run/docker/netns/6b7a66dd568b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-820583": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "df613bfb14c3c19de8431bee4bfb1a435f82a062a92d1a7c32f9d573cfc5cc6e",
	                    "EndpointID": "56a327831232a28e1a1e9607ae22f1a997da2dd6f16e2b06819011729fe115f4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "d2:ca:be:ba:ba:9d",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-820583",
	                        "fc43585f1b09"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-820583 -n embed-certs-820583
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-820583 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-820583 logs -n 25: (1.617448078s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-436655 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ ssh     │ -p bridge-436655 sudo docker system info                                                                                                                                                                                                      │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ ssh     │ -p bridge-436655 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ ssh     │ -p bridge-436655 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ ssh     │ -p bridge-436655 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ ssh     │ -p bridge-436655 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo containerd config dump                                                                                                                                                                                                  │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo crio config                                                                                                                                                                                                             │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ delete  │ -p bridge-436655                                                                                                                                                                                                                              │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ addons  │ enable metrics-server -p no-preload-014435 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ delete  │ -p disable-driver-mounts-541137                                                                                                                                                                                                               │ disable-driver-mounts-541137 │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ start   │ -p old-k8s-version-762177 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-762177       │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ start   │ -p default-k8s-diff-port-954154 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-954154 │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ stop    │ -p no-preload-014435 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ addons  │ enable dashboard -p no-preload-014435 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ start   │ -p no-preload-014435 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-820583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-820583           │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:27:58
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:27:58.391740  329454 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:27:58.392007  329454 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:27:58.392019  329454 out.go:374] Setting ErrFile to fd 2...
	I1227 20:27:58.392024  329454 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:27:58.392200  329454 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:27:58.392635  329454 out.go:368] Setting JSON to false
	I1227 20:27:58.394116  329454 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4227,"bootTime":1766863051,"procs":334,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 20:27:58.394170  329454 start.go:143] virtualization: kvm guest
	I1227 20:27:58.396208  329454 out.go:179] * [no-preload-014435] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 20:27:58.397353  329454 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:27:58.397404  329454 notify.go:221] Checking for updates...
	I1227 20:27:58.402284  329454 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:27:58.403453  329454 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:27:58.404501  329454 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube
	I1227 20:27:58.405402  329454 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 20:27:58.406337  329454 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:27:58.407724  329454 config.go:182] Loaded profile config "no-preload-014435": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:27:58.408529  329454 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:27:58.444975  329454 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 20:27:58.445142  329454 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:27:58.513961  329454 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-27 20:27:58.502568044 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:27:58.514126  329454 docker.go:319] overlay module found
	I1227 20:27:58.515782  329454 out.go:179] * Using the docker driver based on existing profile
	I1227 20:27:58.517109  329454 start.go:309] selected driver: docker
	I1227 20:27:58.517125  329454 start.go:928] validating driver "docker" against &{Name:no-preload-014435 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-014435 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:27:58.517223  329454 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:27:58.517823  329454 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:27:58.595674  329454 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-27 20:27:58.585875251 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:27:58.595934  329454 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:27:58.595977  329454 cni.go:84] Creating CNI manager for ""
	I1227 20:27:58.596033  329454 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:27:58.596069  329454 start.go:353] cluster config:
	{Name:no-preload-014435 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-014435 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:27:58.597961  329454 out.go:179] * Starting "no-preload-014435" primary control-plane node in "no-preload-014435" cluster
	I1227 20:27:58.599203  329454 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:27:58.600331  329454 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:27:58.601362  329454 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:27:58.601447  329454 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:27:58.601483  329454 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/config.json ...
	I1227 20:27:58.601638  329454 cache.go:107] acquiring lock: {Name:mk6e960fa523b2517ada6348a0c0342dcc4edad0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:27:58.601639  329454 cache.go:107] acquiring lock: {Name:mkc7c9b6d0e03c1b5aa41438b1790f395d1e5f80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:27:58.601723  329454 cache.go:107] acquiring lock: {Name:mk823e851565ecb36a02ad5b6a0d4a7df2dfa5e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:27:58.601716  329454 cache.go:107] acquiring lock: {Name:mk2782c5d3ecb08952ecec421a44319fef36b52f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:27:58.601640  329454 cache.go:107] acquiring lock: {Name:mkbf8013e304cf72565565ec73d6e8c841102548 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:27:58.601778  329454 cache.go:107] acquiring lock: {Name:mkbccac0bb664dd93154dd51e6d66db53713b44e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:27:58.601812  329454 cache.go:115] /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I1227 20:27:58.601823  329454 cache.go:115] /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1227 20:27:58.601733  329454 cache.go:107] acquiring lock: {Name:mkd41fdff83db10f19a9aaf39c82eac8b62c593e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:27:58.601853  329454 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0" took 220.48µs
	I1227 20:27:58.601870  329454 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I1227 20:27:58.601835  329454 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 218.034µs
	I1227 20:27:58.601879  329454 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1227 20:27:58.601838  329454 cache.go:115] /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I1227 20:27:58.601888  329454 cache.go:115] /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I1227 20:27:58.601895  329454 cache.go:115] /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1227 20:27:58.601889  329454 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0" took 263.989µs
	I1227 20:27:58.601900  329454 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0" took 125.03µs
	I1227 20:27:58.601904  329454 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 184.71µs
	I1227 20:27:58.601946  329454 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1227 20:27:58.601908  329454 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I1227 20:27:58.601934  329454 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I1227 20:27:58.601899  329454 cache.go:115] /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I1227 20:27:58.601883  329454 cache.go:107] acquiring lock: {Name:mk73abfdc6ada091682c2dbf6848af1c08b22aba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:27:58.601970  329454 cache.go:115] /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1227 20:27:58.601995  329454 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 326.36µs
	I1227 20:27:58.602021  329454 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1227 20:27:58.601967  329454 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0" took 240.147µs
	I1227 20:27:58.602049  329454 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I1227 20:27:58.602035  329454 cache.go:115] /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I1227 20:27:58.602070  329454 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 234.532µs
	I1227 20:27:58.602085  329454 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1227 20:27:58.602097  329454 cache.go:87] Successfully saved all images to host disk.
	I1227 20:27:58.622465  329454 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:27:58.622485  329454 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:27:58.622519  329454 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:27:58.622560  329454 start.go:360] acquireMachinesLock for no-preload-014435: {Name:mk1127162727b27a4df39db89b47542aea8edc3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:27:58.622619  329454 start.go:364] duration metric: took 42.355µs to acquireMachinesLock for "no-preload-014435"
	I1227 20:27:58.622640  329454 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:27:58.622647  329454 fix.go:54] fixHost starting: 
	I1227 20:27:58.622858  329454 cli_runner.go:164] Run: docker container inspect no-preload-014435 --format={{.State.Status}}
	I1227 20:27:58.641704  329454 fix.go:112] recreateIfNeeded on no-preload-014435: state=Stopped err=<nil>
	W1227 20:27:58.641761  329454 fix.go:138] unexpected machine state, will restart: <nil>
	W1227 20:27:55.032619  316262 node_ready.go:57] node "embed-certs-820583" has "Ready":"False" status (will retry)
	W1227 20:27:57.531723  316262 node_ready.go:57] node "embed-certs-820583" has "Ready":"False" status (will retry)
	I1227 20:27:58.532321  316262 node_ready.go:49] node "embed-certs-820583" is "Ready"
	I1227 20:27:58.532356  316262 node_ready.go:38] duration metric: took 13.003722515s for node "embed-certs-820583" to be "Ready" ...
	I1227 20:27:58.532372  316262 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:27:58.532424  316262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:27:58.554172  316262 api_server.go:72] duration metric: took 13.348512181s to wait for apiserver process to appear ...
	I1227 20:27:58.554203  316262 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:27:58.554226  316262 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 20:27:58.560176  316262 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 20:27:58.561307  316262 api_server.go:141] control plane version: v1.35.0
	I1227 20:27:58.561335  316262 api_server.go:131] duration metric: took 7.125251ms to wait for apiserver health ...
	I1227 20:27:58.561346  316262 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:27:58.566974  316262 system_pods.go:59] 8 kube-system pods found
	I1227 20:27:58.567040  316262 system_pods.go:61] "coredns-7d764666f9-nvnjg" [43ffce66-ea7f-41f4-aa47-ce8860d08b61] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:27:58.567061  316262 system_pods.go:61] "etcd-embed-certs-820583" [f38c956c-3a86-4b12-8fd8-6984329006a1] Running
	I1227 20:27:58.567078  316262 system_pods.go:61] "kindnet-6d59t" [4b85db12-4b05-4f39-af95-5c3a6aa7c0ad] Running
	I1227 20:27:58.567094  316262 system_pods.go:61] "kube-apiserver-embed-certs-820583" [c8a09b40-722a-4ac2-a872-5ffaa9b12c59] Running
	I1227 20:27:58.567103  316262 system_pods.go:61] "kube-controller-manager-embed-certs-820583" [30e23f74-cef5-47d5-b851-361af993b344] Running
	I1227 20:27:58.567108  316262 system_pods.go:61] "kube-proxy-srwxn" [8d08af7a-1a92-4a9d-b68e-c816e37f2d26] Running
	I1227 20:27:58.567114  316262 system_pods.go:61] "kube-scheduler-embed-certs-820583" [bf51cd6b-c054-4544-b104-6c19cd0d5491] Running
	I1227 20:27:58.567121  316262 system_pods.go:61] "storage-provisioner" [c02473c8-cc31-4a36-8823-cea2e486cdba] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:27:58.567129  316262 system_pods.go:74] duration metric: took 5.775877ms to wait for pod list to return data ...
	I1227 20:27:58.567140  316262 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:27:58.569874  316262 default_sa.go:45] found service account: "default"
	I1227 20:27:58.569898  316262 default_sa.go:55] duration metric: took 2.751528ms for default service account to be created ...
	I1227 20:27:58.569908  316262 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:27:58.573400  316262 system_pods.go:86] 8 kube-system pods found
	I1227 20:27:58.573438  316262 system_pods.go:89] "coredns-7d764666f9-nvnjg" [43ffce66-ea7f-41f4-aa47-ce8860d08b61] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:27:58.573445  316262 system_pods.go:89] "etcd-embed-certs-820583" [f38c956c-3a86-4b12-8fd8-6984329006a1] Running
	I1227 20:27:58.573456  316262 system_pods.go:89] "kindnet-6d59t" [4b85db12-4b05-4f39-af95-5c3a6aa7c0ad] Running
	I1227 20:27:58.573462  316262 system_pods.go:89] "kube-apiserver-embed-certs-820583" [c8a09b40-722a-4ac2-a872-5ffaa9b12c59] Running
	I1227 20:27:58.573467  316262 system_pods.go:89] "kube-controller-manager-embed-certs-820583" [30e23f74-cef5-47d5-b851-361af993b344] Running
	I1227 20:27:58.573472  316262 system_pods.go:89] "kube-proxy-srwxn" [8d08af7a-1a92-4a9d-b68e-c816e37f2d26] Running
	I1227 20:27:58.573477  316262 system_pods.go:89] "kube-scheduler-embed-certs-820583" [bf51cd6b-c054-4544-b104-6c19cd0d5491] Running
	I1227 20:27:58.573484  316262 system_pods.go:89] "storage-provisioner" [c02473c8-cc31-4a36-8823-cea2e486cdba] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:27:58.573522  316262 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1227 20:27:57.788443  323885 out.go:252]   - Configuring RBAC rules ...
	I1227 20:27:57.788614  323885 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 20:27:57.792026  323885 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 20:27:57.796986  323885 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 20:27:57.799162  323885 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 20:27:57.802156  323885 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 20:27:57.804507  323885 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 20:27:58.148365  323885 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 20:27:58.574860  323885 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 20:27:59.148834  323885 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 20:27:59.150175  323885 kubeadm.go:319] 
	I1227 20:27:59.150283  323885 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 20:27:59.150304  323885 kubeadm.go:319] 
	I1227 20:27:59.150412  323885 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 20:27:59.150425  323885 kubeadm.go:319] 
	I1227 20:27:59.150454  323885 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 20:27:59.150537  323885 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 20:27:59.150597  323885 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 20:27:59.150603  323885 kubeadm.go:319] 
	I1227 20:27:59.150657  323885 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 20:27:59.150663  323885 kubeadm.go:319] 
	I1227 20:27:59.150723  323885 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 20:27:59.150730  323885 kubeadm.go:319] 
	I1227 20:27:59.150788  323885 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 20:27:59.150879  323885 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 20:27:59.150994  323885 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 20:27:59.151006  323885 kubeadm.go:319] 
	I1227 20:27:59.151111  323885 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 20:27:59.151210  323885 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 20:27:59.151216  323885 kubeadm.go:319] 
	I1227 20:27:59.151329  323885 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 6lzbwu.7tkkguqf0vaa8htl \
	I1227 20:27:59.151453  323885 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da6685172cfe638aa995cfd5ba180acce81fcf1770a93ae27b4f215e6d45ef35 \
	I1227 20:27:59.151479  323885 kubeadm.go:319] 	--control-plane 
	I1227 20:27:59.151485  323885 kubeadm.go:319] 
	I1227 20:27:59.151589  323885 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 20:27:59.151596  323885 kubeadm.go:319] 
	I1227 20:27:59.151695  323885 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 6lzbwu.7tkkguqf0vaa8htl \
	I1227 20:27:59.151828  323885 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da6685172cfe638aa995cfd5ba180acce81fcf1770a93ae27b4f215e6d45ef35 
	I1227 20:27:59.157550  323885 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1227 20:27:59.157702  323885 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 20:27:59.157729  323885 cni.go:84] Creating CNI manager for ""
	I1227 20:27:59.157737  323885 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:27:59.160313  323885 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1227 20:27:56.008117  323695 pod_ready.go:104] pod "coredns-5dd5756b68-lklgt" is not "Ready", error: <nil>
	W1227 20:27:58.008191  323695 pod_ready.go:104] pod "coredns-5dd5756b68-lklgt" is not "Ready", error: <nil>
	I1227 20:27:59.161331  323885 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 20:27:59.167491  323885 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1227 20:27:59.167505  323885 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 20:27:59.183495  323885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1227 20:27:59.399424  323885 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 20:27:59.399495  323885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:27:59.399529  323885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-954154 minikube.k8s.io/updated_at=2025_12_27T20_27_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562 minikube.k8s.io/name=default-k8s-diff-port-954154 minikube.k8s.io/primary=true
	I1227 20:27:59.477810  323885 ops.go:34] apiserver oom_adj: -16
	I1227 20:27:59.478028  323885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:27:59.978740  323885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:27:58.792013  316262 system_pods.go:86] 8 kube-system pods found
	I1227 20:27:58.792052  316262 system_pods.go:89] "coredns-7d764666f9-nvnjg" [43ffce66-ea7f-41f4-aa47-ce8860d08b61] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:27:58.792061  316262 system_pods.go:89] "etcd-embed-certs-820583" [f38c956c-3a86-4b12-8fd8-6984329006a1] Running
	I1227 20:27:58.792070  316262 system_pods.go:89] "kindnet-6d59t" [4b85db12-4b05-4f39-af95-5c3a6aa7c0ad] Running
	I1227 20:27:58.792083  316262 system_pods.go:89] "kube-apiserver-embed-certs-820583" [c8a09b40-722a-4ac2-a872-5ffaa9b12c59] Running
	I1227 20:27:58.792090  316262 system_pods.go:89] "kube-controller-manager-embed-certs-820583" [30e23f74-cef5-47d5-b851-361af993b344] Running
	I1227 20:27:58.792094  316262 system_pods.go:89] "kube-proxy-srwxn" [8d08af7a-1a92-4a9d-b68e-c816e37f2d26] Running
	I1227 20:27:58.792099  316262 system_pods.go:89] "kube-scheduler-embed-certs-820583" [bf51cd6b-c054-4544-b104-6c19cd0d5491] Running
	I1227 20:27:58.792116  316262 system_pods.go:89] "storage-provisioner" [c02473c8-cc31-4a36-8823-cea2e486cdba] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:27:59.131577  316262 system_pods.go:86] 8 kube-system pods found
	I1227 20:27:59.131613  316262 system_pods.go:89] "coredns-7d764666f9-nvnjg" [43ffce66-ea7f-41f4-aa47-ce8860d08b61] Running
	I1227 20:27:59.131621  316262 system_pods.go:89] "etcd-embed-certs-820583" [f38c956c-3a86-4b12-8fd8-6984329006a1] Running
	I1227 20:27:59.131626  316262 system_pods.go:89] "kindnet-6d59t" [4b85db12-4b05-4f39-af95-5c3a6aa7c0ad] Running
	I1227 20:27:59.131632  316262 system_pods.go:89] "kube-apiserver-embed-certs-820583" [c8a09b40-722a-4ac2-a872-5ffaa9b12c59] Running
	I1227 20:27:59.131638  316262 system_pods.go:89] "kube-controller-manager-embed-certs-820583" [30e23f74-cef5-47d5-b851-361af993b344] Running
	I1227 20:27:59.131643  316262 system_pods.go:89] "kube-proxy-srwxn" [8d08af7a-1a92-4a9d-b68e-c816e37f2d26] Running
	I1227 20:27:59.131648  316262 system_pods.go:89] "kube-scheduler-embed-certs-820583" [bf51cd6b-c054-4544-b104-6c19cd0d5491] Running
	I1227 20:27:59.131655  316262 system_pods.go:89] "storage-provisioner" [c02473c8-cc31-4a36-8823-cea2e486cdba] Running
	I1227 20:27:59.131666  316262 system_pods.go:126] duration metric: took 561.725895ms to wait for k8s-apps to be running ...
	I1227 20:27:59.131680  316262 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:27:59.131727  316262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:27:59.146222  316262 system_svc.go:56] duration metric: took 14.527264ms WaitForService to wait for kubelet
	I1227 20:27:59.146264  316262 kubeadm.go:587] duration metric: took 13.94061076s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:27:59.146302  316262 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:27:59.150048  316262 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 20:27:59.150079  316262 node_conditions.go:123] node cpu capacity is 8
	I1227 20:27:59.150097  316262 node_conditions.go:105] duration metric: took 3.789408ms to run NodePressure ...
	I1227 20:27:59.150114  316262 start.go:242] waiting for startup goroutines ...
	I1227 20:27:59.150124  316262 start.go:247] waiting for cluster config update ...
	I1227 20:27:59.150136  316262 start.go:256] writing updated cluster config ...
	I1227 20:27:59.150440  316262 ssh_runner.go:195] Run: rm -f paused
	I1227 20:27:59.156973  316262 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:27:59.231789  316262 pod_ready.go:83] waiting for pod "coredns-7d764666f9-nvnjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:59.236379  316262 pod_ready.go:94] pod "coredns-7d764666f9-nvnjg" is "Ready"
	I1227 20:27:59.236407  316262 pod_ready.go:86] duration metric: took 4.58285ms for pod "coredns-7d764666f9-nvnjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:59.238737  316262 pod_ready.go:83] waiting for pod "etcd-embed-certs-820583" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:59.242627  316262 pod_ready.go:94] pod "etcd-embed-certs-820583" is "Ready"
	I1227 20:27:59.242651  316262 pod_ready.go:86] duration metric: took 3.887766ms for pod "etcd-embed-certs-820583" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:59.244479  316262 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-820583" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:59.248307  316262 pod_ready.go:94] pod "kube-apiserver-embed-certs-820583" is "Ready"
	I1227 20:27:59.248323  316262 pod_ready.go:86] duration metric: took 3.793119ms for pod "kube-apiserver-embed-certs-820583" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:59.250030  316262 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-820583" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:59.561183  316262 pod_ready.go:94] pod "kube-controller-manager-embed-certs-820583" is "Ready"
	I1227 20:27:59.561213  316262 pod_ready.go:86] duration metric: took 311.164481ms for pod "kube-controller-manager-embed-certs-820583" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:59.761209  316262 pod_ready.go:83] waiting for pod "kube-proxy-srwxn" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:00.161954  316262 pod_ready.go:94] pod "kube-proxy-srwxn" is "Ready"
	I1227 20:28:00.161982  316262 pod_ready.go:86] duration metric: took 400.748571ms for pod "kube-proxy-srwxn" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:00.362104  316262 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-820583" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:00.761580  316262 pod_ready.go:94] pod "kube-scheduler-embed-certs-820583" is "Ready"
	I1227 20:28:00.761605  316262 pod_ready.go:86] duration metric: took 399.47952ms for pod "kube-scheduler-embed-certs-820583" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:00.761616  316262 pod_ready.go:40] duration metric: took 1.604605321s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:28:00.804718  316262 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 20:28:00.806219  316262 out.go:179] * Done! kubectl is now configured to use "embed-certs-820583" cluster and "default" namespace by default
	I1227 20:27:58.643809  329454 out.go:252] * Restarting existing docker container for "no-preload-014435" ...
	I1227 20:27:58.643894  329454 cli_runner.go:164] Run: docker start no-preload-014435
	I1227 20:27:58.906753  329454 cli_runner.go:164] Run: docker container inspect no-preload-014435 --format={{.State.Status}}
	I1227 20:27:58.924785  329454 kic.go:430] container "no-preload-014435" state is running.
	I1227 20:27:58.925214  329454 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-014435
	I1227 20:27:58.943582  329454 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/config.json ...
	I1227 20:27:58.943804  329454 machine.go:94] provisionDockerMachine start ...
	I1227 20:27:58.943876  329454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:27:58.963268  329454 main.go:144] libmachine: Using SSH client type: native
	I1227 20:27:58.963489  329454 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1227 20:27:58.963502  329454 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:27:58.964148  329454 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38088->127.0.0.1:33113: read: connection reset by peer
	I1227 20:28:02.088773  329454 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-014435
	
	I1227 20:28:02.088808  329454 ubuntu.go:182] provisioning hostname "no-preload-014435"
	I1227 20:28:02.088879  329454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:28:02.106747  329454 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:02.107034  329454 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1227 20:28:02.107051  329454 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-014435 && echo "no-preload-014435" | sudo tee /etc/hostname
	I1227 20:28:02.237979  329454 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-014435
	
	I1227 20:28:02.238075  329454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:28:02.256942  329454 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:02.257149  329454 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1227 20:28:02.257166  329454 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-014435' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-014435/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-014435' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:28:02.379610  329454 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:28:02.379637  329454 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-10897/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-10897/.minikube}
	I1227 20:28:02.379659  329454 ubuntu.go:190] setting up certificates
	I1227 20:28:02.379675  329454 provision.go:84] configureAuth start
	I1227 20:28:02.379723  329454 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-014435
	I1227 20:28:02.399177  329454 provision.go:143] copyHostCerts
	I1227 20:28:02.399251  329454 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem, removing ...
	I1227 20:28:02.399269  329454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem
	I1227 20:28:02.399362  329454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem (1675 bytes)
	I1227 20:28:02.399491  329454 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem, removing ...
	I1227 20:28:02.399504  329454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem
	I1227 20:28:02.399543  329454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem (1078 bytes)
	I1227 20:28:02.399608  329454 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem, removing ...
	I1227 20:28:02.399615  329454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem
	I1227 20:28:02.399652  329454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem (1123 bytes)
	I1227 20:28:02.399720  329454 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem org=jenkins.no-preload-014435 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-014435]
	I1227 20:28:02.506982  329454 provision.go:177] copyRemoteCerts
	I1227 20:28:02.507060  329454 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:28:02.507106  329454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:28:02.526229  329454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/no-preload-014435/id_rsa Username:docker}
	I1227 20:28:02.621315  329454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:28:02.639853  329454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:28:02.657543  329454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 20:28:02.674318  329454 provision.go:87] duration metric: took 294.620848ms to configureAuth
	I1227 20:28:02.674339  329454 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:28:02.674495  329454 config.go:182] Loaded profile config "no-preload-014435": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:02.674589  329454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:28:02.693247  329454 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:02.693478  329454 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1227 20:28:02.693496  329454 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:28:03.032856  329454 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:28:03.032880  329454 machine.go:97] duration metric: took 4.089060818s to provisionDockerMachine
	I1227 20:28:03.032895  329454 start.go:293] postStartSetup for "no-preload-014435" (driver="docker")
	I1227 20:28:03.032907  329454 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:28:03.033019  329454 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:28:03.033072  329454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:28:03.055364  329454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/no-preload-014435/id_rsa Username:docker}
	I1227 20:28:03.147720  329454 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:28:03.151360  329454 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:28:03.151389  329454 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:28:03.151398  329454 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/addons for local assets ...
	I1227 20:28:03.151445  329454 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/files for local assets ...
	I1227 20:28:03.151550  329454 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem -> 144272.pem in /etc/ssl/certs
	I1227 20:28:03.151673  329454 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:28:03.158902  329454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:28:03.175420  329454 start.go:296] duration metric: took 142.512685ms for postStartSetup
	I1227 20:28:03.175476  329454 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:28:03.175508  329454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:28:03.193780  329454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/no-preload-014435/id_rsa Username:docker}
	I1227 20:28:03.281945  329454 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:28:03.286276  329454 fix.go:56] duration metric: took 4.663623643s for fixHost
	I1227 20:28:03.286311  329454 start.go:83] releasing machines lock for "no-preload-014435", held for 4.663679303s
	I1227 20:28:03.286375  329454 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-014435
	I1227 20:28:03.305955  329454 ssh_runner.go:195] Run: cat /version.json
	I1227 20:28:03.305981  329454 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:28:03.306009  329454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:28:03.306056  329454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:28:03.324166  329454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/no-preload-014435/id_rsa Username:docker}
	I1227 20:28:03.324721  329454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/no-preload-014435/id_rsa Username:docker}
	I1227 20:28:00.479146  323885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:28:00.978036  323885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:28:01.479120  323885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:28:01.978076  323885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:28:02.478123  323885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:28:02.979106  323885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:28:03.478040  323885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:28:03.978717  323885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:28:04.073622  323885 kubeadm.go:1114] duration metric: took 4.674196665s to wait for elevateKubeSystemPrivileges
	I1227 20:28:04.073654  323885 kubeadm.go:403] duration metric: took 11.415089879s to StartCluster
	I1227 20:28:04.073675  323885 settings.go:142] acquiring lock: {Name:mkf3077b12a3cef0d75d0d0642c2652aa74718f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:04.073735  323885 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:28:04.077127  323885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/kubeconfig: {Name:mk4ccafb996928672a8e78a62ce47d8add645009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:04.077491  323885 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 20:28:04.077907  323885 config.go:182] Loaded profile config "default-k8s-diff-port-954154": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:04.078568  323885 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:28:04.078736  323885 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:28:04.078825  323885 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-954154"
	I1227 20:28:04.078842  323885 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-954154"
	I1227 20:28:04.078862  323885 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-954154"
	I1227 20:28:04.078892  323885 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-954154"
	I1227 20:28:04.079308  323885 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-954154 --format={{.State.Status}}
	I1227 20:28:04.078870  323885 host.go:66] Checking if "default-k8s-diff-port-954154" exists ...
	I1227 20:28:04.079932  323885 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-954154 --format={{.State.Status}}
	I1227 20:28:04.080060  323885 out.go:179] * Verifying Kubernetes components...
	I1227 20:28:04.082150  323885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:28:04.113712  323885 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:28:03.413293  329454 ssh_runner.go:195] Run: systemctl --version
	I1227 20:28:03.477068  329454 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:28:03.515886  329454 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:28:03.521141  329454 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:28:03.521191  329454 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:28:03.529551  329454 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:28:03.529575  329454 start.go:496] detecting cgroup driver to use...
	I1227 20:28:03.529607  329454 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 20:28:03.529655  329454 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:28:03.546287  329454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:28:03.558529  329454 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:28:03.558578  329454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:28:03.572326  329454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:28:03.584386  329454 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:28:03.670995  329454 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:28:03.769358  329454 docker.go:234] disabling docker service ...
	I1227 20:28:03.769412  329454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:28:03.784321  329454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:28:03.797777  329454 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:28:03.891171  329454 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:28:04.009325  329454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:28:04.028089  329454 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:28:04.047996  329454 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:28:04.048059  329454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:04.060036  329454 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 20:28:04.060150  329454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:04.071235  329454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:04.087432  329454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:04.107201  329454 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:28:04.122524  329454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:04.143217  329454 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:04.158947  329454 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:04.177157  329454 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:28:04.191376  329454 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:28:04.208640  329454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:28:04.356227  329454 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:28:04.573538  329454 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:28:04.573633  329454 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:28:04.580310  329454 start.go:574] Will wait 60s for crictl version
	I1227 20:28:04.580505  329454 ssh_runner.go:195] Run: which crictl
	I1227 20:28:04.585791  329454 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:28:04.625304  329454 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:28:04.625638  329454 ssh_runner.go:195] Run: crio --version
	I1227 20:28:04.670742  329454 ssh_runner.go:195] Run: crio --version
	I1227 20:28:04.715438  329454 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:28:04.113774  323885 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-954154"
	I1227 20:28:04.113819  323885 host.go:66] Checking if "default-k8s-diff-port-954154" exists ...
	I1227 20:28:04.114317  323885 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-954154 --format={{.State.Status}}
	I1227 20:28:04.114940  323885 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:28:04.114960  323885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:28:04.115014  323885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:04.145561  323885 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:28:04.145559  323885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/default-k8s-diff-port-954154/id_rsa Username:docker}
	I1227 20:28:04.145583  323885 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:28:04.145640  323885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:04.180539  323885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/default-k8s-diff-port-954154/id_rsa Username:docker}
	I1227 20:28:04.224052  323885 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 20:28:04.293851  323885 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:28:04.298619  323885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:28:04.331712  323885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:28:04.490813  323885 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1227 20:28:04.492602  323885 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-954154" to be "Ready" ...
	I1227 20:28:04.770837  323885 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1227 20:28:00.507726  323695 pod_ready.go:104] pod "coredns-5dd5756b68-lklgt" is not "Ready", error: <nil>
	W1227 20:28:02.508399  323695 pod_ready.go:104] pod "coredns-5dd5756b68-lklgt" is not "Ready", error: <nil>
	W1227 20:28:04.514656  323695 pod_ready.go:104] pod "coredns-5dd5756b68-lklgt" is not "Ready", error: <nil>
	I1227 20:28:04.773145  323885 addons.go:530] duration metric: took 694.410939ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1227 20:28:04.998166  323885 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-954154" context rescaled to 1 replicas
	I1227 20:28:04.717114  329454 cli_runner.go:164] Run: docker network inspect no-preload-014435 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:28:04.742364  329454 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1227 20:28:04.747450  329454 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:28:04.761467  329454 kubeadm.go:884] updating cluster {Name:no-preload-014435 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-014435 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:28:04.761649  329454 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:28:04.761703  329454 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:28:04.801231  329454 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:28:04.801257  329454 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:28:04.801266  329454 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 crio true true} ...
	I1227 20:28:04.801395  329454 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-014435 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-014435 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:28:04.801483  329454 ssh_runner.go:195] Run: crio config
	I1227 20:28:04.870893  329454 cni.go:84] Creating CNI manager for ""
	I1227 20:28:04.870930  329454 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:28:04.870948  329454 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:28:04.870979  329454 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-014435 NodeName:no-preload-014435 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:28:04.871151  329454 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-014435"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:28:04.871225  329454 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:28:04.882031  329454 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:28:04.882093  329454 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:28:04.891866  329454 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1227 20:28:04.907282  329454 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:28:04.922653  329454 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1227 20:28:04.939231  329454 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:28:04.943933  329454 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:28:04.956280  329454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:28:05.079343  329454 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:28:05.113527  329454 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435 for IP: 192.168.94.2
	I1227 20:28:05.113551  329454 certs.go:195] generating shared ca certs ...
	I1227 20:28:05.113574  329454 certs.go:227] acquiring lock for ca certs: {Name:mkf159d13acb73e6d0ac046e4aaef1dce56a185d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:05.113745  329454 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key
	I1227 20:28:05.113813  329454 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key
	I1227 20:28:05.113826  329454 certs.go:257] generating profile certs ...
	I1227 20:28:05.113978  329454 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/client.key
	I1227 20:28:05.114070  329454 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/apiserver.key.00c17d97
	I1227 20:28:05.114126  329454 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/proxy-client.key
	I1227 20:28:05.114270  329454 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem (1338 bytes)
	W1227 20:28:05.114339  329454 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427_empty.pem, impossibly tiny 0 bytes
	I1227 20:28:05.114350  329454 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 20:28:05.114381  329454 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:28:05.114409  329454 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:28:05.114437  329454 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem (1675 bytes)
	I1227 20:28:05.114503  329454 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:28:05.115253  329454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:28:05.141671  329454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:28:05.167267  329454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:28:05.191382  329454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 20:28:05.221277  329454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 20:28:05.247367  329454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 20:28:05.268782  329454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:28:05.289803  329454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:28:05.314755  329454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /usr/share/ca-certificates/144272.pem (1708 bytes)
	I1227 20:28:05.337703  329454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:28:05.361403  329454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem --> /usr/share/ca-certificates/14427.pem (1338 bytes)
	I1227 20:28:05.383777  329454 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:28:05.399582  329454 ssh_runner.go:195] Run: openssl version
	I1227 20:28:05.407644  329454 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/144272.pem
	I1227 20:28:05.417281  329454 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/144272.pem /etc/ssl/certs/144272.pem
	I1227 20:28:05.426700  329454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144272.pem
	I1227 20:28:05.431087  329454 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 19:58 /usr/share/ca-certificates/144272.pem
	I1227 20:28:05.431148  329454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144272.pem
	I1227 20:28:05.492545  329454 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:28:05.503787  329454 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:05.514315  329454 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:28:05.525106  329454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:05.530164  329454 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:05.530223  329454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:05.590565  329454 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:28:05.601554  329454 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14427.pem
	I1227 20:28:05.613568  329454 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14427.pem /etc/ssl/certs/14427.pem
	I1227 20:28:05.624387  329454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14427.pem
	I1227 20:28:05.630188  329454 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 19:58 /usr/share/ca-certificates/14427.pem
	I1227 20:28:05.630264  329454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14427.pem
	I1227 20:28:05.690345  329454 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
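The ln/openssl/test sequence above (for 144272.pem, minikubeCA.pem and 14427.pem) follows OpenSSL's subject-hash layout for /etc/ssl/certs: each certificate is symlinked by name, its subject hash is computed with `openssl x509 -hash -noout`, and a <hash>.0 symlink is expected so OpenSSL can look the CA up by hash. A minimal bash sketch of the same pattern, using one of the paths from this run:

	cert=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$cert" /etc/ssl/certs/"$(basename "$cert")"   # symlink by name
	hash=$(openssl x509 -hash -noout -in "$cert")              # prints e.g. b5213941
	sudo test -L "/etc/ssl/certs/${hash}.0"                    # the hash symlink the log checks for
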
	I1227 20:28:05.701622  329454 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:28:05.706974  329454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:28:05.770449  329454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:28:05.835600  329454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:28:05.897669  329454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:28:05.959856  329454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:28:06.021309  329454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
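Each `-checkend 86400` invocation above asks OpenSSL whether the given control-plane certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit presumably prompts minikube to regenerate the certificate before proceeding. A standalone sketch of one such check, with exit-code handling added for illustration:

	if openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400; then
	    echo "certificate valid for at least another 24h"
	else
	    echo "certificate expires within 24h"   # apparently not hit in this run
	fi
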
	I1227 20:28:06.082496  329454 kubeadm.go:401] StartCluster: {Name:no-preload-014435 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-014435 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:28:06.082619  329454 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:28:06.082776  329454 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:28:06.130036  329454 cri.go:96] found id: "ccbaca54231342c2e05e7f6f39601d2b1cde71a9e688363aa99cd9f16b7b740b"
	I1227 20:28:06.130068  329454 cri.go:96] found id: "7e08e2ae41d9f45611fbcd90fd0c42176961f4b3b41557748ee9cae592b1a0c6"
	I1227 20:28:06.130074  329454 cri.go:96] found id: "a18c71b2140b36f980a1c87567071c8701b3e5f8aa4ab2f6fb52beb4656e9bd2"
	I1227 20:28:06.130079  329454 cri.go:96] found id: "455bcec8a175a162715bf84ffdfb0f031b741ff15312c1ed41956c3dafb97b6e"
	I1227 20:28:06.130083  329454 cri.go:96] found id: ""
	I1227 20:28:06.130126  329454 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 20:28:06.146202  329454 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:28:06Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:28:06.146265  329454 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:28:06.156422  329454 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:28:06.156524  329454 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:28:06.156592  329454 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:28:06.166650  329454 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:28:06.167995  329454 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-014435" does not appear in /home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:28:06.169027  329454 kubeconfig.go:62] /home/jenkins/minikube-integration/22332-10897/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-014435" cluster setting kubeconfig missing "no-preload-014435" context setting]
	I1227 20:28:06.170510  329454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/kubeconfig: {Name:mk4ccafb996928672a8e78a62ce47d8add645009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:06.172930  329454 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:28:06.185259  329454 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1227 20:28:06.185294  329454 kubeadm.go:602] duration metric: took 28.756088ms to restartPrimaryControlPlane
	I1227 20:28:06.185306  329454 kubeadm.go:403] duration metric: took 102.822543ms to StartCluster
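Note the restart path here: the `diff -u` at 20:28:06.172930 compares the kubeadm config already on the node with the freshly generated kubeadm.yaml.new, and since they match, restartPrimaryControlPlane concludes the running cluster does not require reconfiguration and finishes in ~29ms instead of re-running kubeadm. A rough sketch of that decision, assuming only the two paths shown in the log:

	if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null 2>&1; then
	    echo "running cluster does not require reconfiguration"   # the branch taken in this run
	else
	    echo "kubeadm config changed; control plane would be reconfigured"
	fi
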
	I1227 20:28:06.185322  329454 settings.go:142] acquiring lock: {Name:mkf3077b12a3cef0d75d0d0642c2652aa74718f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:06.185380  329454 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:28:06.187760  329454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/kubeconfig: {Name:mk4ccafb996928672a8e78a62ce47d8add645009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:06.188033  329454 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:28:06.188181  329454 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:28:06.188287  329454 addons.go:70] Setting storage-provisioner=true in profile "no-preload-014435"
	I1227 20:28:06.188319  329454 addons.go:239] Setting addon storage-provisioner=true in "no-preload-014435"
	W1227 20:28:06.188332  329454 addons.go:248] addon storage-provisioner should already be in state true
	I1227 20:28:06.188367  329454 config.go:182] Loaded profile config "no-preload-014435": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:06.188421  329454 addons.go:70] Setting dashboard=true in profile "no-preload-014435"
	I1227 20:28:06.188442  329454 addons.go:239] Setting addon dashboard=true in "no-preload-014435"
	W1227 20:28:06.188450  329454 addons.go:248] addon dashboard should already be in state true
	I1227 20:28:06.188463  329454 host.go:66] Checking if "no-preload-014435" exists ...
	I1227 20:28:06.188482  329454 host.go:66] Checking if "no-preload-014435" exists ...
	I1227 20:28:06.188641  329454 addons.go:70] Setting default-storageclass=true in profile "no-preload-014435"
	I1227 20:28:06.188670  329454 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-014435"
	I1227 20:28:06.189012  329454 cli_runner.go:164] Run: docker container inspect no-preload-014435 --format={{.State.Status}}
	I1227 20:28:06.189035  329454 cli_runner.go:164] Run: docker container inspect no-preload-014435 --format={{.State.Status}}
	I1227 20:28:06.189125  329454 cli_runner.go:164] Run: docker container inspect no-preload-014435 --format={{.State.Status}}
	I1227 20:28:06.193223  329454 out.go:179] * Verifying Kubernetes components...
	I1227 20:28:06.194649  329454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:28:06.220577  329454 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 20:28:06.220577  329454 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:28:06.221053  329454 addons.go:239] Setting addon default-storageclass=true in "no-preload-014435"
	W1227 20:28:06.221074  329454 addons.go:248] addon default-storageclass should already be in state true
	I1227 20:28:06.221103  329454 host.go:66] Checking if "no-preload-014435" exists ...
	I1227 20:28:06.221581  329454 cli_runner.go:164] Run: docker container inspect no-preload-014435 --format={{.State.Status}}
	I1227 20:28:06.221887  329454 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:28:06.221905  329454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:28:06.221989  329454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:28:06.225563  329454 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 20:28:06.229144  329454 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 20:28:06.229166  329454 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 20:28:06.229237  329454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:28:06.253516  329454 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:28:06.253555  329454 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:28:06.253616  329454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:28:06.253829  329454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/no-preload-014435/id_rsa Username:docker}
	I1227 20:28:06.266670  329454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/no-preload-014435/id_rsa Username:docker}
	I1227 20:28:06.289141  329454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/no-preload-014435/id_rsa Username:docker}
	I1227 20:28:06.377764  329454 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:28:06.378960  329454 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:28:06.386166  329454 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 20:28:06.386189  329454 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 20:28:06.395638  329454 node_ready.go:35] waiting up to 6m0s for node "no-preload-014435" to be "Ready" ...
	I1227 20:28:06.401524  329454 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:28:06.404844  329454 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 20:28:06.404865  329454 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 20:28:06.422207  329454 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 20:28:06.422229  329454 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 20:28:06.441471  329454 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 20:28:06.441495  329454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 20:28:06.462189  329454 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 20:28:06.462214  329454 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 20:28:06.476955  329454 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 20:28:06.476980  329454 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 20:28:06.490295  329454 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 20:28:06.490321  329454 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 20:28:06.504701  329454 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 20:28:06.504735  329454 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 20:28:06.519137  329454 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:28:06.519161  329454 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 20:28:06.532459  329454 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:28:07.790104  329454 node_ready.go:49] node "no-preload-014435" is "Ready"
	I1227 20:28:07.790143  329454 node_ready.go:38] duration metric: took 1.394439888s for node "no-preload-014435" to be "Ready" ...
	I1227 20:28:07.790163  329454 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:28:07.790220  329454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:28:08.456038  329454 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.077018471s)
	I1227 20:28:08.456108  329454 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.054533452s)
	I1227 20:28:08.456266  329454 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.923758762s)
	I1227 20:28:08.456300  329454 api_server.go:72] duration metric: took 2.268211493s to wait for apiserver process to appear ...
	I1227 20:28:08.456315  329454 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:28:08.456359  329454 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1227 20:28:08.460388  329454 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-014435 addons enable metrics-server
	
	I1227 20:28:08.463375  329454 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:28:08.463402  329454 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:28:08.468895  329454 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	
	
	==> CRI-O <==
	Dec 27 20:27:58 embed-certs-820583 crio[773]: time="2025-12-27T20:27:58.495718799Z" level=info msg="Starting container: 23f0c9a0342cfa99040b371413a3b8a2afea695ebbfca8147c571be55696e194" id=5b2249e1-0a9d-43bb-ab24-00d32f1f0863 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:27:58 embed-certs-820583 crio[773]: time="2025-12-27T20:27:58.498027152Z" level=info msg="Started container" PID=1901 containerID=23f0c9a0342cfa99040b371413a3b8a2afea695ebbfca8147c571be55696e194 description=kube-system/coredns-7d764666f9-nvnjg/coredns id=5b2249e1-0a9d-43bb-ab24-00d32f1f0863 name=/runtime.v1.RuntimeService/StartContainer sandboxID=702301bfc396f33c2c56d5ebb8eb1e0c61b54fc4a1fdd0c4a72a8b6254422998
	Dec 27 20:28:01 embed-certs-820583 crio[773]: time="2025-12-27T20:28:01.250510056Z" level=info msg="Running pod sandbox: default/busybox/POD" id=1e20a2e2-bfe3-4381-b3a4-413d14dab6ab name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:28:01 embed-certs-820583 crio[773]: time="2025-12-27T20:28:01.250589928Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:01 embed-certs-820583 crio[773]: time="2025-12-27T20:28:01.255192825Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:85f232950e6edeafbbdee54014f6359d41584c0bb3091cf3036976fbc0962f2e UID:cb192a66-d82e-4965-a6f8-046b0b6618d0 NetNS:/var/run/netns/2b974e6b-508e-401c-a555-849e24d620ef Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00059c5c0}] Aliases:map[]}"
	Dec 27 20:28:01 embed-certs-820583 crio[773]: time="2025-12-27T20:28:01.255220719Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 27 20:28:01 embed-certs-820583 crio[773]: time="2025-12-27T20:28:01.264085361Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:85f232950e6edeafbbdee54014f6359d41584c0bb3091cf3036976fbc0962f2e UID:cb192a66-d82e-4965-a6f8-046b0b6618d0 NetNS:/var/run/netns/2b974e6b-508e-401c-a555-849e24d620ef Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00059c5c0}] Aliases:map[]}"
	Dec 27 20:28:01 embed-certs-820583 crio[773]: time="2025-12-27T20:28:01.264216751Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 27 20:28:01 embed-certs-820583 crio[773]: time="2025-12-27T20:28:01.264868585Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 27 20:28:01 embed-certs-820583 crio[773]: time="2025-12-27T20:28:01.26558473Z" level=info msg="Ran pod sandbox 85f232950e6edeafbbdee54014f6359d41584c0bb3091cf3036976fbc0962f2e with infra container: default/busybox/POD" id=1e20a2e2-bfe3-4381-b3a4-413d14dab6ab name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:28:01 embed-certs-820583 crio[773]: time="2025-12-27T20:28:01.266722266Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=884ef68c-4feb-4187-b8cf-ec86c66f645b name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:28:01 embed-certs-820583 crio[773]: time="2025-12-27T20:28:01.26681501Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=884ef68c-4feb-4187-b8cf-ec86c66f645b name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:28:01 embed-certs-820583 crio[773]: time="2025-12-27T20:28:01.266841194Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=884ef68c-4feb-4187-b8cf-ec86c66f645b name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:28:01 embed-certs-820583 crio[773]: time="2025-12-27T20:28:01.267584847Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=25449268-63d0-489b-b05e-28a4ef4fb8cb name=/runtime.v1.ImageService/PullImage
	Dec 27 20:28:01 embed-certs-820583 crio[773]: time="2025-12-27T20:28:01.269962746Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 27 20:28:01 embed-certs-820583 crio[773]: time="2025-12-27T20:28:01.857514062Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=25449268-63d0-489b-b05e-28a4ef4fb8cb name=/runtime.v1.ImageService/PullImage
	Dec 27 20:28:01 embed-certs-820583 crio[773]: time="2025-12-27T20:28:01.858133006Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e3ae59fd-6a0f-42ef-b926-b8d4652d97ba name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:28:01 embed-certs-820583 crio[773]: time="2025-12-27T20:28:01.859857242Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e571ccb6-a4e7-4f3b-8207-531bc5d5a682 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:28:01 embed-certs-820583 crio[773]: time="2025-12-27T20:28:01.863005144Z" level=info msg="Creating container: default/busybox/busybox" id=fc84f7f5-e480-4655-85b4-50d5dfda40f6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:28:01 embed-certs-820583 crio[773]: time="2025-12-27T20:28:01.863111008Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:01 embed-certs-820583 crio[773]: time="2025-12-27T20:28:01.866474538Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:01 embed-certs-820583 crio[773]: time="2025-12-27T20:28:01.866858403Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:01 embed-certs-820583 crio[773]: time="2025-12-27T20:28:01.895617918Z" level=info msg="Created container de13850d7c9f9430ca56144364e0fc77eb2b79ff11f1e1ba9deaccad8e0c8947: default/busybox/busybox" id=fc84f7f5-e480-4655-85b4-50d5dfda40f6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:28:01 embed-certs-820583 crio[773]: time="2025-12-27T20:28:01.896239246Z" level=info msg="Starting container: de13850d7c9f9430ca56144364e0fc77eb2b79ff11f1e1ba9deaccad8e0c8947" id=627e6eb3-79c1-4be6-91c4-9de75b706c6e name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:28:01 embed-certs-820583 crio[773]: time="2025-12-27T20:28:01.897785552Z" level=info msg="Started container" PID=1983 containerID=de13850d7c9f9430ca56144364e0fc77eb2b79ff11f1e1ba9deaccad8e0c8947 description=default/busybox/busybox id=627e6eb3-79c1-4be6-91c4-9de75b706c6e name=/runtime.v1.RuntimeService/StartContainer sandboxID=85f232950e6edeafbbdee54014f6359d41584c0bb3091cf3036976fbc0962f2e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	de13850d7c9f9       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   85f232950e6ed       busybox                                      default
	23f0c9a0342cf       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      11 seconds ago      Running             coredns                   0                   702301bfc396f       coredns-7d764666f9-nvnjg                     kube-system
	215536a169fbe       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   75a7cd8b8c877       storage-provisioner                          kube-system
	e6ab816da6c2b       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    22 seconds ago      Running             kindnet-cni               0                   e8e67e907900c       kindnet-6d59t                                kube-system
	691e7b4303b37       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                      24 seconds ago      Running             kube-proxy                0                   54c96f5768f5e       kube-proxy-srwxn                             kube-system
	3c86f9400991a       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                      34 seconds ago      Running             kube-controller-manager   0                   8c427f8289bf0       kube-controller-manager-embed-certs-820583   kube-system
	75e8e1ff5a19c       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                      34 seconds ago      Running             kube-scheduler            0                   7a45bbac3d477       kube-scheduler-embed-certs-820583            kube-system
	80e62af71c5a2       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                      34 seconds ago      Running             kube-apiserver            0                   e5e0c21a6ec6c       kube-apiserver-embed-certs-820583            kube-system
	1540d0f23d8f3       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      34 seconds ago      Running             etcd                      0                   62aeb89a72331       etcd-embed-certs-820583                      kube-system
	
	
	==> coredns [23f0c9a0342cfa99040b371413a3b8a2afea695ebbfca8147c571be55696e194] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:59784 - 19965 "HINFO IN 5989561658084791751.7170585408206652570. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.455556591s
	
	
	==> describe nodes <==
	Name:               embed-certs-820583
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-820583
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=embed-certs-820583
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_27_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:27:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-820583
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:28:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:28:09 +0000   Sat, 27 Dec 2025 20:27:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:28:09 +0000   Sat, 27 Dec 2025 20:27:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:28:09 +0000   Sat, 27 Dec 2025 20:27:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:28:09 +0000   Sat, 27 Dec 2025 20:27:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-820583
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                41c5c9fb-06be-4108-9630-9ada526cc117
	  Boot ID:                    e862e3ac-f8e7-431b-a536-9252fc29cb3f
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-7d764666f9-nvnjg                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-embed-certs-820583                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-6d59t                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-embed-certs-820583             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-embed-certs-820583    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-srwxn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-embed-certs-820583             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  26s   node-controller  Node embed-certs-820583 event: Registered Node embed-certs-820583 in Controller
	
	
	==> dmesg <==
	[  +0.091803] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026212] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.286875] kauditd_printk_skb: 47 callbacks suppressed
	[Dec27 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e cb b0 88 25 d7 08 06
	[ +12.641300] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 46 2d 1c 8d 7d 08 06
	[  +0.000396] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e cb b0 88 25 d7 08 06
	[Dec27 20:26] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 11 8b 52 63 6c 08 06
	[ +22.750477] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 b6 be 1c 32 17 08 06
	[  +0.000388] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 11 8b 52 63 6c 08 06
	[ +11.650371] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae cd 49 2a 1d c0 08 06
	[Dec27 20:27] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 53 ca 84 73 c0 08 06
	[  +0.000337] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 74 f5 52 32 9a 08 06
	[ +17.275927] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 80 6a 51 25 19 08 06
	[  +0.000341] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ae cd 49 2a 1d c0 08 06
	
	
	==> etcd [1540d0f23d8f3bd9d26a3463baa97d6fd40b5d1679e58104cb565557c47323c0] <==
	{"level":"warn","ts":"2025-12-27T20:27:44.880768Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"191.614098ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/edit\" limit:1 ","response":"range_response_count:1 size:3788"}
	{"level":"info","ts":"2025-12-27T20:27:44.880826Z","caller":"traceutil/trace.go:172","msg":"trace[790856460] range","detail":"{range_begin:/registry/clusterroles/edit; range_end:; response_count:1; response_revision:316; }","duration":"191.679602ms","start":"2025-12-27T20:27:44.689129Z","end":"2025-12-27T20:27:44.880809Z","steps":["trace[790856460] 'agreement among raft nodes before linearized reading'  (duration: 136.559804ms)","trace[790856460] 'range keys from in-memory index tree'  (duration: 55.010821ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-27T20:27:44.880846Z","caller":"traceutil/trace.go:172","msg":"trace[425986803] transaction","detail":"{read_only:false; response_revision:317; number_of_response:1; }","duration":"192.720085ms","start":"2025-12-27T20:27:44.688112Z","end":"2025-12-27T20:27:44.880832Z","steps":["trace[425986803] 'process raft request'  (duration: 137.49184ms)","trace[425986803] 'compare'  (duration: 55.115745ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-27T20:27:44.882168Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"192.341065ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-12-27T20:27:44.882213Z","caller":"traceutil/trace.go:172","msg":"trace[1811264371] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:317; }","duration":"192.394204ms","start":"2025-12-27T20:27:44.689809Z","end":"2025-12-27T20:27:44.882203Z","steps":["trace[1811264371] 'agreement among raft nodes before linearized reading'  (duration: 192.250114ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:27:44.882428Z","caller":"traceutil/trace.go:172","msg":"trace[544364815] transaction","detail":"{read_only:false; response_revision:318; number_of_response:1; }","duration":"194.283534ms","start":"2025-12-27T20:27:44.688135Z","end":"2025-12-27T20:27:44.882418Z","steps":["trace[544364815] 'process raft request'  (duration: 194.123048ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:27:44.882556Z","caller":"traceutil/trace.go:172","msg":"trace[840387324] transaction","detail":"{read_only:false; response_revision:324; number_of_response:1; }","duration":"190.111467ms","start":"2025-12-27T20:27:44.692427Z","end":"2025-12-27T20:27:44.882539Z","steps":["trace[840387324] 'process raft request'  (duration: 190.07636ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:27:44.882585Z","caller":"traceutil/trace.go:172","msg":"trace[2080933788] transaction","detail":"{read_only:false; response_revision:322; number_of_response:1; }","duration":"192.266713ms","start":"2025-12-27T20:27:44.690312Z","end":"2025-12-27T20:27:44.882579Z","steps":["trace[2080933788] 'process raft request'  (duration: 192.122575ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:27:44.882610Z","caller":"traceutil/trace.go:172","msg":"trace[980757940] transaction","detail":"{read_only:false; response_revision:323; number_of_response:1; }","duration":"191.87085ms","start":"2025-12-27T20:27:44.690730Z","end":"2025-12-27T20:27:44.882601Z","steps":["trace[980757940] 'process raft request'  (duration: 191.742489ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:27:44.882682Z","caller":"traceutil/trace.go:172","msg":"trace[7689648] transaction","detail":"{read_only:false; response_revision:319; number_of_response:1; }","duration":"193.695794ms","start":"2025-12-27T20:27:44.688974Z","end":"2025-12-27T20:27:44.882670Z","steps":["trace[7689648] 'process raft request'  (duration: 193.355302ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:27:44.882719Z","caller":"traceutil/trace.go:172","msg":"trace[710860592] transaction","detail":"{read_only:false; response_revision:321; number_of_response:1; }","duration":"193.22578ms","start":"2025-12-27T20:27:44.689485Z","end":"2025-12-27T20:27:44.882711Z","steps":["trace[710860592] 'process raft request'  (duration: 192.914742ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:27:44.882719Z","caller":"traceutil/trace.go:172","msg":"trace[1265415788] transaction","detail":"{read_only:false; response_revision:320; number_of_response:1; }","duration":"193.661379ms","start":"2025-12-27T20:27:44.689051Z","end":"2025-12-27T20:27:44.882713Z","steps":["trace[1265415788] 'process raft request'  (duration: 193.309863ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:27:45.051866Z","caller":"traceutil/trace.go:172","msg":"trace[53396176] linearizableReadLoop","detail":"{readStateIndex:350; appliedIndex:350; }","duration":"134.796293ms","start":"2025-12-27T20:27:44.917044Z","end":"2025-12-27T20:27:45.051841Z","steps":["trace[53396176] 'read index received'  (duration: 134.78605ms)","trace[53396176] 'applied index is now lower than readState.Index'  (duration: 9.086µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-27T20:27:45.188978Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"271.907359ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kindnet\" limit:1 ","response":"range_response_count:1 size:4657"}
	{"level":"info","ts":"2025-12-27T20:27:45.189047Z","caller":"traceutil/trace.go:172","msg":"trace[1947922553] range","detail":"{range_begin:/registry/daemonsets/kube-system/kindnet; range_end:; response_count:1; response_revision:339; }","duration":"271.994111ms","start":"2025-12-27T20:27:44.917036Z","end":"2025-12-27T20:27:45.189030Z","steps":["trace[1947922553] 'agreement among raft nodes before linearized reading'  (duration: 134.89705ms)","trace[1947922553] 'range keys from in-memory index tree'  (duration: 136.942406ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-27T20:27:45.189066Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"137.114768ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638357599131290755 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-srwxn.18852c6a89ee3959\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-srwxn.18852c6a89ee3959\" value_size:617 lease:6414985562276514642 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-12-27T20:27:45.189163Z","caller":"traceutil/trace.go:172","msg":"trace[1139759895] transaction","detail":"{read_only:false; response_revision:340; number_of_response:1; }","duration":"275.392563ms","start":"2025-12-27T20:27:44.913748Z","end":"2025-12-27T20:27:45.189141Z","steps":["trace[1139759895] 'process raft request'  (duration: 138.15803ms)","trace[1139759895] 'compare'  (duration: 136.991201ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-27T20:27:45.189243Z","caller":"traceutil/trace.go:172","msg":"trace[979362250] linearizableReadLoop","detail":"{readStateIndex:351; appliedIndex:350; }","duration":"137.274018ms","start":"2025-12-27T20:27:45.051953Z","end":"2025-12-27T20:27:45.189227Z","steps":["trace[979362250] 'read index received'  (duration: 136.931546ms)","trace[979362250] 'applied index is now lower than readState.Index'  (duration: 341.47µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-27T20:27:45.189285Z","caller":"traceutil/trace.go:172","msg":"trace[1844409570] transaction","detail":"{read_only:false; response_revision:341; number_of_response:1; }","duration":"274.525083ms","start":"2025-12-27T20:27:44.914747Z","end":"2025-12-27T20:27:45.189272Z","steps":["trace[1844409570] 'process raft request'  (duration: 274.385865ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:27:45.189339Z","caller":"traceutil/trace.go:172","msg":"trace[764721978] transaction","detail":"{read_only:false; response_revision:342; number_of_response:1; }","duration":"274.178533ms","start":"2025-12-27T20:27:44.915148Z","end":"2025-12-27T20:27:45.189326Z","steps":["trace[764721978] 'process raft request'  (duration: 274.050532ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:27:45.189462Z","caller":"traceutil/trace.go:172","msg":"trace[1763578568] transaction","detail":"{read_only:false; response_revision:343; number_of_response:1; }","duration":"274.302306ms","start":"2025-12-27T20:27:44.915149Z","end":"2025-12-27T20:27:45.189452Z","steps":["trace[1763578568] 'process raft request'  (duration: 274.081991ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-27T20:27:45.189484Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"187.095764ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2025-12-27T20:27:45.189519Z","caller":"traceutil/trace.go:172","msg":"trace[138101294] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:343; }","duration":"187.14581ms","start":"2025-12-27T20:27:45.002363Z","end":"2025-12-27T20:27:45.189509Z","steps":["trace[138101294] 'agreement among raft nodes before linearized reading'  (duration: 186.962322ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:27:45.189576Z","caller":"traceutil/trace.go:172","msg":"trace[1204495229] transaction","detail":"{read_only:false; response_revision:344; number_of_response:1; }","duration":"272.703339ms","start":"2025-12-27T20:27:44.916863Z","end":"2025-12-27T20:27:45.189566Z","steps":["trace[1204495229] 'process raft request'  (duration: 272.524716ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:27:45.189631Z","caller":"traceutil/trace.go:172","msg":"trace[1741895484] transaction","detail":"{read_only:false; response_revision:345; number_of_response:1; }","duration":"267.684447ms","start":"2025-12-27T20:27:44.921935Z","end":"2025-12-27T20:27:45.189619Z","steps":["trace[1741895484] 'process raft request'  (duration: 267.525737ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:28:10 up  1:10,  0 user,  load average: 3.64, 3.17, 2.21
	Linux embed-certs-820583 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e6ab816da6c2b0c1eb753a0c3239da89d980ad7705ce1559f64486adb8287e62] <==
	I1227 20:27:47.424867       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:27:47.425166       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 20:27:47.425310       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:27:47.425334       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:27:47.425357       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:27:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:27:47.722128       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:27:47.722156       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:27:47.722173       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:27:47.722342       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 20:27:48.022774       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:27:48.022799       1 metrics.go:72] Registering metrics
	I1227 20:27:48.022847       1 controller.go:711] "Syncing nftables rules"
	I1227 20:27:57.722219       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:27:57.722305       1 main.go:301] handling current node
	I1227 20:28:07.725374       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:28:07.725417       1 main.go:301] handling current node
	
	
	==> kube-apiserver [80e62af71c5a21ac21276c56f356a95b2ec7ef781c75e300f2af22ece2b832d3] <==
	E1227 20:27:36.642671       1 controller.go:201] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1227 20:27:36.656624       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 20:27:36.656754       1 default_servicecidr_controller.go:169] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1227 20:27:36.668575       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 20:27:36.669132       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:27:36.675813       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:27:36.845596       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:27:37.535764       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1227 20:27:37.540018       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1227 20:27:37.540038       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:27:37.990825       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:27:38.034155       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:27:38.141402       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 20:27:38.147294       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1227 20:27:38.148330       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 20:27:38.152376       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:27:38.583620       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:27:38.947607       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 20:27:38.955192       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 20:27:38.962252       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1227 20:27:44.515328       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:27:44.520510       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:27:44.680876       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:27:44.688652       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1227 20:28:08.069315       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:37410: use of closed network connection
	
	
	==> kube-controller-manager [3c86f9400991ac205ebc1b305d6c3f12327ea1588b20d5e112dda611381550e9] <==
	I1227 20:27:43.389232       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:43.388456       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:43.389435       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:43.389446       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:43.389396       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:43.389528       1 range_allocator.go:177] "Sending events to api server"
	I1227 20:27:43.389597       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 20:27:43.389604       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:27:43.389608       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:43.389628       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:43.389614       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:43.389605       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:43.389630       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:43.389852       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:43.389890       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:43.390330       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:43.390390       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:43.394152       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:27:43.400547       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:43.579811       1 range_allocator.go:433] "Set node PodCIDR" node="embed-certs-820583" podCIDRs=["10.244.0.0/24"]
	I1227 20:27:43.589396       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:43.589412       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:27:43.589416       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:27:43.594743       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:58.386944       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [691e7b4303b371a2ba00a9fda16eebccbe7ec616fa6430e8bfa00812cdb240bc] <==
	I1227 20:27:45.497727       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:27:45.576518       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:27:45.676648       1 shared_informer.go:377] "Caches are synced"
	I1227 20:27:45.676723       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 20:27:45.676869       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:27:45.697993       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:27:45.698096       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:27:45.703204       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:27:45.704205       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:27:45.704985       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:27:45.707598       1 config.go:200] "Starting service config controller"
	I1227 20:27:45.707614       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:27:45.707639       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:27:45.707644       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:27:45.707655       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:27:45.707660       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:27:45.707774       1 config.go:309] "Starting node config controller"
	I1227 20:27:45.707800       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:27:45.707810       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:27:45.808119       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:27:45.808163       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 20:27:45.808471       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [75e8e1ff5a19c24984d4252044455f27feb524a162058d3b3283c5d8ae0a37f0] <==
	E1227 20:27:36.593711       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 20:27:36.593770       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 20:27:36.593831       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 20:27:36.593849       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 20:27:36.593862       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 20:27:36.594063       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 20:27:36.594158       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 20:27:36.594224       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 20:27:36.594251       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 20:27:36.594224       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 20:27:36.594349       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 20:27:36.594368       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 20:27:37.402714       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 20:27:37.432140       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 20:27:37.450515       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 20:27:37.455503       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 20:27:37.501006       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 20:27:37.517541       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 20:27:37.606596       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 20:27:37.710855       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 20:27:37.777977       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1227 20:27:37.780887       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 20:27:37.814495       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 20:27:37.817761       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	I1227 20:27:40.088753       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:27:45 embed-certs-820583 kubelet[1305]: I1227 20:27:45.019493    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbdnl\" (UniqueName: \"kubernetes.io/projected/8d08af7a-1a92-4a9d-b68e-c816e37f2d26-kube-api-access-dbdnl\") pod \"kube-proxy-srwxn\" (UID: \"8d08af7a-1a92-4a9d-b68e-c816e37f2d26\") " pod="kube-system/kube-proxy-srwxn"
	Dec 27 20:27:45 embed-certs-820583 kubelet[1305]: I1227 20:27:45.120058    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b85db12-4b05-4f39-af95-5c3a6aa7c0ad-xtables-lock\") pod \"kindnet-6d59t\" (UID: \"4b85db12-4b05-4f39-af95-5c3a6aa7c0ad\") " pod="kube-system/kindnet-6d59t"
	Dec 27 20:27:45 embed-certs-820583 kubelet[1305]: I1227 20:27:45.120116    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cqww\" (UniqueName: \"kubernetes.io/projected/4b85db12-4b05-4f39-af95-5c3a6aa7c0ad-kube-api-access-5cqww\") pod \"kindnet-6d59t\" (UID: \"4b85db12-4b05-4f39-af95-5c3a6aa7c0ad\") " pod="kube-system/kindnet-6d59t"
	Dec 27 20:27:45 embed-certs-820583 kubelet[1305]: I1227 20:27:45.120180    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4b85db12-4b05-4f39-af95-5c3a6aa7c0ad-cni-cfg\") pod \"kindnet-6d59t\" (UID: \"4b85db12-4b05-4f39-af95-5c3a6aa7c0ad\") " pod="kube-system/kindnet-6d59t"
	Dec 27 20:27:45 embed-certs-820583 kubelet[1305]: I1227 20:27:45.120201    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b85db12-4b05-4f39-af95-5c3a6aa7c0ad-lib-modules\") pod \"kindnet-6d59t\" (UID: \"4b85db12-4b05-4f39-af95-5c3a6aa7c0ad\") " pod="kube-system/kindnet-6d59t"
	Dec 27 20:27:46 embed-certs-820583 kubelet[1305]: E1227 20:27:46.957081    1305 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-820583" containerName="kube-controller-manager"
	Dec 27 20:27:46 embed-certs-820583 kubelet[1305]: I1227 20:27:46.998367    1305 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-srwxn" podStartSLOduration=2.9983473800000002 podStartE2EDuration="2.99834738s" podCreationTimestamp="2025-12-27 20:27:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:27:45.832856264 +0000 UTC m=+7.125033817" watchObservedRunningTime="2025-12-27 20:27:46.99834738 +0000 UTC m=+8.290524932"
	Dec 27 20:27:47 embed-certs-820583 kubelet[1305]: E1227 20:27:47.370337    1305 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-820583" containerName="kube-apiserver"
	Dec 27 20:27:47 embed-certs-820583 kubelet[1305]: E1227 20:27:47.609398    1305 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-820583" containerName="kube-scheduler"
	Dec 27 20:27:47 embed-certs-820583 kubelet[1305]: I1227 20:27:47.838265    1305 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-6d59t" podStartSLOduration=2.057923546 podStartE2EDuration="3.838241892s" podCreationTimestamp="2025-12-27 20:27:44 +0000 UTC" firstStartedPulling="2025-12-27 20:27:45.407547798 +0000 UTC m=+6.699725350" lastFinishedPulling="2025-12-27 20:27:47.187866149 +0000 UTC m=+8.480043696" observedRunningTime="2025-12-27 20:27:47.83803482 +0000 UTC m=+9.130212383" watchObservedRunningTime="2025-12-27 20:27:47.838241892 +0000 UTC m=+9.130419444"
	Dec 27 20:27:52 embed-certs-820583 kubelet[1305]: E1227 20:27:52.194907    1305 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-820583" containerName="etcd"
	Dec 27 20:27:56 embed-certs-820583 kubelet[1305]: E1227 20:27:56.962460    1305 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-820583" containerName="kube-controller-manager"
	Dec 27 20:27:57 embed-certs-820583 kubelet[1305]: E1227 20:27:57.376680    1305 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-820583" containerName="kube-apiserver"
	Dec 27 20:27:57 embed-certs-820583 kubelet[1305]: E1227 20:27:57.614293    1305 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-820583" containerName="kube-scheduler"
	Dec 27 20:27:58 embed-certs-820583 kubelet[1305]: I1227 20:27:58.101293    1305 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 27 20:27:58 embed-certs-820583 kubelet[1305]: I1227 20:27:58.217341    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h497d\" (UniqueName: \"kubernetes.io/projected/43ffce66-ea7f-41f4-aa47-ce8860d08b61-kube-api-access-h497d\") pod \"coredns-7d764666f9-nvnjg\" (UID: \"43ffce66-ea7f-41f4-aa47-ce8860d08b61\") " pod="kube-system/coredns-7d764666f9-nvnjg"
	Dec 27 20:27:58 embed-certs-820583 kubelet[1305]: I1227 20:27:58.217496    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c02473c8-cc31-4a36-8823-cea2e486cdba-tmp\") pod \"storage-provisioner\" (UID: \"c02473c8-cc31-4a36-8823-cea2e486cdba\") " pod="kube-system/storage-provisioner"
	Dec 27 20:27:58 embed-certs-820583 kubelet[1305]: I1227 20:27:58.217578    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43ffce66-ea7f-41f4-aa47-ce8860d08b61-config-volume\") pod \"coredns-7d764666f9-nvnjg\" (UID: \"43ffce66-ea7f-41f4-aa47-ce8860d08b61\") " pod="kube-system/coredns-7d764666f9-nvnjg"
	Dec 27 20:27:58 embed-certs-820583 kubelet[1305]: I1227 20:27:58.217627    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2wrd\" (UniqueName: \"kubernetes.io/projected/c02473c8-cc31-4a36-8823-cea2e486cdba-kube-api-access-l2wrd\") pod \"storage-provisioner\" (UID: \"c02473c8-cc31-4a36-8823-cea2e486cdba\") " pod="kube-system/storage-provisioner"
	Dec 27 20:27:58 embed-certs-820583 kubelet[1305]: E1227 20:27:58.850768    1305 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-nvnjg" containerName="coredns"
	Dec 27 20:27:58 embed-certs-820583 kubelet[1305]: I1227 20:27:58.873004    1305 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-nvnjg" podStartSLOduration=14.872983918 podStartE2EDuration="14.872983918s" podCreationTimestamp="2025-12-27 20:27:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:27:58.863874778 +0000 UTC m=+20.156052329" watchObservedRunningTime="2025-12-27 20:27:58.872983918 +0000 UTC m=+20.165161469"
	Dec 27 20:27:58 embed-certs-820583 kubelet[1305]: I1227 20:27:58.873299    1305 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.873279903 podStartE2EDuration="13.873279903s" podCreationTimestamp="2025-12-27 20:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:27:58.872773217 +0000 UTC m=+20.164950769" watchObservedRunningTime="2025-12-27 20:27:58.873279903 +0000 UTC m=+20.165457454"
	Dec 27 20:27:59 embed-certs-820583 kubelet[1305]: E1227 20:27:59.855824    1305 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-nvnjg" containerName="coredns"
	Dec 27 20:28:00 embed-certs-820583 kubelet[1305]: E1227 20:28:00.858185    1305 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-nvnjg" containerName="coredns"
	Dec 27 20:28:01 embed-certs-820583 kubelet[1305]: I1227 20:28:01.033992    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7m9h\" (UniqueName: \"kubernetes.io/projected/cb192a66-d82e-4965-a6f8-046b0b6618d0-kube-api-access-h7m9h\") pod \"busybox\" (UID: \"cb192a66-d82e-4965-a6f8-046b0b6618d0\") " pod="default/busybox"
	
	
	==> storage-provisioner [215536a169fbed9c7c2c8eb8e73c045d732cc7ebd1f502a450a85863e8d9baac] <==
	I1227 20:27:58.508723       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 20:27:58.518545       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 20:27:58.518596       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 20:27:58.521246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:27:58.527006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:27:58.527198       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 20:27:58.527756       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6894b207-1c50-480d-809b-b77065e433a4", APIVersion:"v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-820583_5d30fabe-cbac-44f9-be3f-da7aa1cd803b became leader
	I1227 20:27:58.527970       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-820583_5d30fabe-cbac-44f9-be3f-da7aa1cd803b!
	W1227 20:27:58.535612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:27:58.544758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:27:58.628371       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-820583_5d30fabe-cbac-44f9-be3f-da7aa1cd803b!
	W1227 20:28:00.548245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:00.552101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:02.555873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:02.560982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:04.565769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:04.571083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:06.575261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:06.580132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:08.587511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:08.597353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-820583 -n embed-certs-820583
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-820583 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.89s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-954154 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-954154 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (242.208019ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:28:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
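The stderr above pins down the failure mode: minikube's "check paused" step shells out to runc, but on this crio-based node runc's default state directory /run/runc was never created, so the probe exits non-zero before the addon is applied. A minimal way to reproduce both sides of that check by hand, as an editor's sketch (the ssh invocations and the crictl fallback are illustrative, not part of the test helper):

	# Sketch: re-run the probe behind MK_ADDON_ENABLE_PAUSED by hand.
	# runc reads container state from /run/runc, which does not exist on this node:
	out/minikube-linux-amd64 -p default-k8s-diff-port-954154 ssh -- sudo runc list -f json
	#   => "open /run/runc: no such file or directory", exit status 1
	# Listing the same kube-system containers through crio's CRI endpoint succeeds:
	out/minikube-linux-amd64 -p default-k8s-diff-port-954154 ssh -- \
	  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system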
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-954154 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-954154 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-954154 describe deploy/metrics-server -n kube-system: exit status 1 (59.765163ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-954154 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
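For reference, the image override the assertion looks for could be verified directly against the Deployment once the addon actually enables; a hedged sketch (the jsonpath query is illustrative, not what the test helper runs):

	kubectl --context default-k8s-diff-port-954154 -n kube-system \
	  get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'
	# expected to contain: fake.domain/registry.k8s.io/echoserver:1.4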
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-954154
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-954154:

-- stdout --
	[
	    {
	        "Id": "c38cf1a04b3b50e78185395c88764e088687423a81d7f074a8b0f01b542d6987",
	        "Created": "2025-12-27T20:27:45.398813644Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 326100,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:27:45.442261595Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/c38cf1a04b3b50e78185395c88764e088687423a81d7f074a8b0f01b542d6987/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c38cf1a04b3b50e78185395c88764e088687423a81d7f074a8b0f01b542d6987/hostname",
	        "HostsPath": "/var/lib/docker/containers/c38cf1a04b3b50e78185395c88764e088687423a81d7f074a8b0f01b542d6987/hosts",
	        "LogPath": "/var/lib/docker/containers/c38cf1a04b3b50e78185395c88764e088687423a81d7f074a8b0f01b542d6987/c38cf1a04b3b50e78185395c88764e088687423a81d7f074a8b0f01b542d6987-json.log",
	        "Name": "/default-k8s-diff-port-954154",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-954154:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-954154",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c38cf1a04b3b50e78185395c88764e088687423a81d7f074a8b0f01b542d6987",
	                "LowerDir": "/var/lib/docker/overlay2/addfd789be6ed936f248b11101a678e63407804f63584452c7d6d0a2d65aa48f-init/diff:/var/lib/docker/overlay2/37a0694d5e09176fe02add5b16d603e44d65e3a985a3465775c60819ea782502/diff",
	                "MergedDir": "/var/lib/docker/overlay2/addfd789be6ed936f248b11101a678e63407804f63584452c7d6d0a2d65aa48f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/addfd789be6ed936f248b11101a678e63407804f63584452c7d6d0a2d65aa48f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/addfd789be6ed936f248b11101a678e63407804f63584452c7d6d0a2d65aa48f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-954154",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-954154/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-954154",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-954154",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-954154",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d0c5c70977c7300f30814157d90db62bd8fd221c73a6c336ce0ac57a66a4e343",
	            "SandboxKey": "/var/run/docker/netns/d0c5c70977c7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-954154": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8bb8ec9ff71cd755e87cbf3d8e42ebf773088a83f754b577a011fbcdb7983e0c",
	                    "EndpointID": "c4239138e920cb75333fe7b45056e091c06a746083d9922d85ce942f2024c866",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "6a:87:75:3e:a2:56",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-954154",
	                        "c38cf1a04b3b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-954154 -n default-k8s-diff-port-954154
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-954154 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-954154 logs -n 25: (1.201062619s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-436655 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ ssh     │ -p bridge-436655 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ ssh     │ -p bridge-436655 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ ssh     │ -p bridge-436655 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo containerd config dump                                                                                                                                                                                                  │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo crio config                                                                                                                                                                                                             │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ delete  │ -p bridge-436655                                                                                                                                                                                                                              │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ addons  │ enable metrics-server -p no-preload-014435 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ delete  │ -p disable-driver-mounts-541137                                                                                                                                                                                                               │ disable-driver-mounts-541137 │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ start   │ -p old-k8s-version-762177 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-762177       │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:28 UTC │
	│ start   │ -p default-k8s-diff-port-954154 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-954154 │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:28 UTC │
	│ stop    │ -p no-preload-014435 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ addons  │ enable dashboard -p no-preload-014435 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ start   │ -p no-preload-014435 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-820583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-820583           │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ stop    │ -p embed-certs-820583 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-820583           │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-954154 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-954154 │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:27:58
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:27:58.391740  329454 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:27:58.392007  329454 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:27:58.392019  329454 out.go:374] Setting ErrFile to fd 2...
	I1227 20:27:58.392024  329454 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:27:58.392200  329454 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:27:58.392635  329454 out.go:368] Setting JSON to false
	I1227 20:27:58.394116  329454 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4227,"bootTime":1766863051,"procs":334,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 20:27:58.394170  329454 start.go:143] virtualization: kvm guest
	I1227 20:27:58.396208  329454 out.go:179] * [no-preload-014435] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 20:27:58.397353  329454 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:27:58.397404  329454 notify.go:221] Checking for updates...
	I1227 20:27:58.402284  329454 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:27:58.403453  329454 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:27:58.404501  329454 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube
	I1227 20:27:58.405402  329454 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 20:27:58.406337  329454 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:27:58.407724  329454 config.go:182] Loaded profile config "no-preload-014435": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:27:58.408529  329454 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:27:58.444975  329454 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 20:27:58.445142  329454 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:27:58.513961  329454 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-27 20:27:58.502568044 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:27:58.514126  329454 docker.go:319] overlay module found
	I1227 20:27:58.515782  329454 out.go:179] * Using the docker driver based on existing profile
	I1227 20:27:58.517109  329454 start.go:309] selected driver: docker
	I1227 20:27:58.517125  329454 start.go:928] validating driver "docker" against &{Name:no-preload-014435 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-014435 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:27:58.517223  329454 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:27:58.517823  329454 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:27:58.595674  329454 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-27 20:27:58.585875251 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:27:58.595934  329454 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:27:58.595977  329454 cni.go:84] Creating CNI manager for ""
	I1227 20:27:58.596033  329454 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:27:58.596069  329454 start.go:353] cluster config:
	{Name:no-preload-014435 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-014435 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:27:58.597961  329454 out.go:179] * Starting "no-preload-014435" primary control-plane node in "no-preload-014435" cluster
	I1227 20:27:58.599203  329454 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:27:58.600331  329454 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:27:58.601362  329454 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:27:58.601447  329454 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:27:58.601483  329454 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/config.json ...
	I1227 20:27:58.601638  329454 cache.go:107] acquiring lock: {Name:mk6e960fa523b2517ada6348a0c0342dcc4edad0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:27:58.601639  329454 cache.go:107] acquiring lock: {Name:mkc7c9b6d0e03c1b5aa41438b1790f395d1e5f80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:27:58.601723  329454 cache.go:107] acquiring lock: {Name:mk823e851565ecb36a02ad5b6a0d4a7df2dfa5e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:27:58.601716  329454 cache.go:107] acquiring lock: {Name:mk2782c5d3ecb08952ecec421a44319fef36b52f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:27:58.601640  329454 cache.go:107] acquiring lock: {Name:mkbf8013e304cf72565565ec73d6e8c841102548 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:27:58.601778  329454 cache.go:107] acquiring lock: {Name:mkbccac0bb664dd93154dd51e6d66db53713b44e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:27:58.601812  329454 cache.go:115] /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I1227 20:27:58.601823  329454 cache.go:115] /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1227 20:27:58.601733  329454 cache.go:107] acquiring lock: {Name:mkd41fdff83db10f19a9aaf39c82eac8b62c593e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:27:58.601853  329454 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0" took 220.48µs
	I1227 20:27:58.601870  329454 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I1227 20:27:58.601835  329454 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 218.034µs
	I1227 20:27:58.601879  329454 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1227 20:27:58.601838  329454 cache.go:115] /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I1227 20:27:58.601888  329454 cache.go:115] /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I1227 20:27:58.601895  329454 cache.go:115] /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1227 20:27:58.601889  329454 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0" took 263.989µs
	I1227 20:27:58.601900  329454 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0" took 125.03µs
	I1227 20:27:58.601904  329454 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 184.71µs
	I1227 20:27:58.601946  329454 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1227 20:27:58.601908  329454 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I1227 20:27:58.601934  329454 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I1227 20:27:58.601899  329454 cache.go:115] /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I1227 20:27:58.601883  329454 cache.go:107] acquiring lock: {Name:mk73abfdc6ada091682c2dbf6848af1c08b22aba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:27:58.601970  329454 cache.go:115] /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1227 20:27:58.601995  329454 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 326.36µs
	I1227 20:27:58.602021  329454 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1227 20:27:58.601967  329454 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0" took 240.147µs
	I1227 20:27:58.602049  329454 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I1227 20:27:58.602035  329454 cache.go:115] /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I1227 20:27:58.602070  329454 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 234.532µs
	I1227 20:27:58.602085  329454 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22332-10897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1227 20:27:58.602097  329454 cache.go:87] Successfully saved all images to host disk.
	I1227 20:27:58.622465  329454 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:27:58.622485  329454 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:27:58.622519  329454 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:27:58.622560  329454 start.go:360] acquireMachinesLock for no-preload-014435: {Name:mk1127162727b27a4df39db89b47542aea8edc3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:27:58.622619  329454 start.go:364] duration metric: took 42.355µs to acquireMachinesLock for "no-preload-014435"
	I1227 20:27:58.622640  329454 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:27:58.622647  329454 fix.go:54] fixHost starting: 
	I1227 20:27:58.622858  329454 cli_runner.go:164] Run: docker container inspect no-preload-014435 --format={{.State.Status}}
	I1227 20:27:58.641704  329454 fix.go:112] recreateIfNeeded on no-preload-014435: state=Stopped err=<nil>
	W1227 20:27:58.641761  329454 fix.go:138] unexpected machine state, will restart: <nil>
	W1227 20:27:55.032619  316262 node_ready.go:57] node "embed-certs-820583" has "Ready":"False" status (will retry)
	W1227 20:27:57.531723  316262 node_ready.go:57] node "embed-certs-820583" has "Ready":"False" status (will retry)
	I1227 20:27:58.532321  316262 node_ready.go:49] node "embed-certs-820583" is "Ready"
	I1227 20:27:58.532356  316262 node_ready.go:38] duration metric: took 13.003722515s for node "embed-certs-820583" to be "Ready" ...
	I1227 20:27:58.532372  316262 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:27:58.532424  316262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:27:58.554172  316262 api_server.go:72] duration metric: took 13.348512181s to wait for apiserver process to appear ...
	I1227 20:27:58.554203  316262 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:27:58.554226  316262 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 20:27:58.560176  316262 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 20:27:58.561307  316262 api_server.go:141] control plane version: v1.35.0
	I1227 20:27:58.561335  316262 api_server.go:131] duration metric: took 7.125251ms to wait for apiserver health ...
	I1227 20:27:58.561346  316262 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:27:58.566974  316262 system_pods.go:59] 8 kube-system pods found
	I1227 20:27:58.567040  316262 system_pods.go:61] "coredns-7d764666f9-nvnjg" [43ffce66-ea7f-41f4-aa47-ce8860d08b61] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:27:58.567061  316262 system_pods.go:61] "etcd-embed-certs-820583" [f38c956c-3a86-4b12-8fd8-6984329006a1] Running
	I1227 20:27:58.567078  316262 system_pods.go:61] "kindnet-6d59t" [4b85db12-4b05-4f39-af95-5c3a6aa7c0ad] Running
	I1227 20:27:58.567094  316262 system_pods.go:61] "kube-apiserver-embed-certs-820583" [c8a09b40-722a-4ac2-a872-5ffaa9b12c59] Running
	I1227 20:27:58.567103  316262 system_pods.go:61] "kube-controller-manager-embed-certs-820583" [30e23f74-cef5-47d5-b851-361af993b344] Running
	I1227 20:27:58.567108  316262 system_pods.go:61] "kube-proxy-srwxn" [8d08af7a-1a92-4a9d-b68e-c816e37f2d26] Running
	I1227 20:27:58.567114  316262 system_pods.go:61] "kube-scheduler-embed-certs-820583" [bf51cd6b-c054-4544-b104-6c19cd0d5491] Running
	I1227 20:27:58.567121  316262 system_pods.go:61] "storage-provisioner" [c02473c8-cc31-4a36-8823-cea2e486cdba] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:27:58.567129  316262 system_pods.go:74] duration metric: took 5.775877ms to wait for pod list to return data ...
	I1227 20:27:58.567140  316262 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:27:58.569874  316262 default_sa.go:45] found service account: "default"
	I1227 20:27:58.569898  316262 default_sa.go:55] duration metric: took 2.751528ms for default service account to be created ...
	I1227 20:27:58.569908  316262 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:27:58.573400  316262 system_pods.go:86] 8 kube-system pods found
	I1227 20:27:58.573438  316262 system_pods.go:89] "coredns-7d764666f9-nvnjg" [43ffce66-ea7f-41f4-aa47-ce8860d08b61] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:27:58.573445  316262 system_pods.go:89] "etcd-embed-certs-820583" [f38c956c-3a86-4b12-8fd8-6984329006a1] Running
	I1227 20:27:58.573456  316262 system_pods.go:89] "kindnet-6d59t" [4b85db12-4b05-4f39-af95-5c3a6aa7c0ad] Running
	I1227 20:27:58.573462  316262 system_pods.go:89] "kube-apiserver-embed-certs-820583" [c8a09b40-722a-4ac2-a872-5ffaa9b12c59] Running
	I1227 20:27:58.573467  316262 system_pods.go:89] "kube-controller-manager-embed-certs-820583" [30e23f74-cef5-47d5-b851-361af993b344] Running
	I1227 20:27:58.573472  316262 system_pods.go:89] "kube-proxy-srwxn" [8d08af7a-1a92-4a9d-b68e-c816e37f2d26] Running
	I1227 20:27:58.573477  316262 system_pods.go:89] "kube-scheduler-embed-certs-820583" [bf51cd6b-c054-4544-b104-6c19cd0d5491] Running
	I1227 20:27:58.573484  316262 system_pods.go:89] "storage-provisioner" [c02473c8-cc31-4a36-8823-cea2e486cdba] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:27:58.573522  316262 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1227 20:27:57.788443  323885 out.go:252]   - Configuring RBAC rules ...
	I1227 20:27:57.788614  323885 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 20:27:57.792026  323885 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 20:27:57.796986  323885 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 20:27:57.799162  323885 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 20:27:57.802156  323885 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 20:27:57.804507  323885 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 20:27:58.148365  323885 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 20:27:58.574860  323885 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 20:27:59.148834  323885 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 20:27:59.150175  323885 kubeadm.go:319] 
	I1227 20:27:59.150283  323885 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 20:27:59.150304  323885 kubeadm.go:319] 
	I1227 20:27:59.150412  323885 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 20:27:59.150425  323885 kubeadm.go:319] 
	I1227 20:27:59.150454  323885 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 20:27:59.150537  323885 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 20:27:59.150597  323885 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 20:27:59.150603  323885 kubeadm.go:319] 
	I1227 20:27:59.150657  323885 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 20:27:59.150663  323885 kubeadm.go:319] 
	I1227 20:27:59.150723  323885 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 20:27:59.150730  323885 kubeadm.go:319] 
	I1227 20:27:59.150788  323885 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 20:27:59.150879  323885 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 20:27:59.150994  323885 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 20:27:59.151006  323885 kubeadm.go:319] 
	I1227 20:27:59.151111  323885 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 20:27:59.151210  323885 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 20:27:59.151216  323885 kubeadm.go:319] 
	I1227 20:27:59.151329  323885 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 6lzbwu.7tkkguqf0vaa8htl \
	I1227 20:27:59.151453  323885 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da6685172cfe638aa995cfd5ba180acce81fcf1770a93ae27b4f215e6d45ef35 \
	I1227 20:27:59.151479  323885 kubeadm.go:319] 	--control-plane 
	I1227 20:27:59.151485  323885 kubeadm.go:319] 
	I1227 20:27:59.151589  323885 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 20:27:59.151596  323885 kubeadm.go:319] 
	I1227 20:27:59.151695  323885 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 6lzbwu.7tkkguqf0vaa8htl \
	I1227 20:27:59.151828  323885 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da6685172cfe638aa995cfd5ba180acce81fcf1770a93ae27b4f215e6d45ef35 
	I1227 20:27:59.157550  323885 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1227 20:27:59.157702  323885 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 20:27:59.157729  323885 cni.go:84] Creating CNI manager for ""
	I1227 20:27:59.157737  323885 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:27:59.160313  323885 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1227 20:27:56.008117  323695 pod_ready.go:104] pod "coredns-5dd5756b68-lklgt" is not "Ready", error: <nil>
	W1227 20:27:58.008191  323695 pod_ready.go:104] pod "coredns-5dd5756b68-lklgt" is not "Ready", error: <nil>
	I1227 20:27:59.161331  323885 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 20:27:59.167491  323885 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1227 20:27:59.167505  323885 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 20:27:59.183495  323885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1227 20:27:59.399424  323885 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 20:27:59.399495  323885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:27:59.399529  323885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-954154 minikube.k8s.io/updated_at=2025_12_27T20_27_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562 minikube.k8s.io/name=default-k8s-diff-port-954154 minikube.k8s.io/primary=true
	I1227 20:27:59.477810  323885 ops.go:34] apiserver oom_adj: -16
	I1227 20:27:59.478028  323885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:27:59.978740  323885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:27:58.792013  316262 system_pods.go:86] 8 kube-system pods found
	I1227 20:27:58.792052  316262 system_pods.go:89] "coredns-7d764666f9-nvnjg" [43ffce66-ea7f-41f4-aa47-ce8860d08b61] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:27:58.792061  316262 system_pods.go:89] "etcd-embed-certs-820583" [f38c956c-3a86-4b12-8fd8-6984329006a1] Running
	I1227 20:27:58.792070  316262 system_pods.go:89] "kindnet-6d59t" [4b85db12-4b05-4f39-af95-5c3a6aa7c0ad] Running
	I1227 20:27:58.792083  316262 system_pods.go:89] "kube-apiserver-embed-certs-820583" [c8a09b40-722a-4ac2-a872-5ffaa9b12c59] Running
	I1227 20:27:58.792090  316262 system_pods.go:89] "kube-controller-manager-embed-certs-820583" [30e23f74-cef5-47d5-b851-361af993b344] Running
	I1227 20:27:58.792094  316262 system_pods.go:89] "kube-proxy-srwxn" [8d08af7a-1a92-4a9d-b68e-c816e37f2d26] Running
	I1227 20:27:58.792099  316262 system_pods.go:89] "kube-scheduler-embed-certs-820583" [bf51cd6b-c054-4544-b104-6c19cd0d5491] Running
	I1227 20:27:58.792116  316262 system_pods.go:89] "storage-provisioner" [c02473c8-cc31-4a36-8823-cea2e486cdba] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:27:59.131577  316262 system_pods.go:86] 8 kube-system pods found
	I1227 20:27:59.131613  316262 system_pods.go:89] "coredns-7d764666f9-nvnjg" [43ffce66-ea7f-41f4-aa47-ce8860d08b61] Running
	I1227 20:27:59.131621  316262 system_pods.go:89] "etcd-embed-certs-820583" [f38c956c-3a86-4b12-8fd8-6984329006a1] Running
	I1227 20:27:59.131626  316262 system_pods.go:89] "kindnet-6d59t" [4b85db12-4b05-4f39-af95-5c3a6aa7c0ad] Running
	I1227 20:27:59.131632  316262 system_pods.go:89] "kube-apiserver-embed-certs-820583" [c8a09b40-722a-4ac2-a872-5ffaa9b12c59] Running
	I1227 20:27:59.131638  316262 system_pods.go:89] "kube-controller-manager-embed-certs-820583" [30e23f74-cef5-47d5-b851-361af993b344] Running
	I1227 20:27:59.131643  316262 system_pods.go:89] "kube-proxy-srwxn" [8d08af7a-1a92-4a9d-b68e-c816e37f2d26] Running
	I1227 20:27:59.131648  316262 system_pods.go:89] "kube-scheduler-embed-certs-820583" [bf51cd6b-c054-4544-b104-6c19cd0d5491] Running
	I1227 20:27:59.131655  316262 system_pods.go:89] "storage-provisioner" [c02473c8-cc31-4a36-8823-cea2e486cdba] Running
	I1227 20:27:59.131666  316262 system_pods.go:126] duration metric: took 561.725895ms to wait for k8s-apps to be running ...
	I1227 20:27:59.131680  316262 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:27:59.131727  316262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:27:59.146222  316262 system_svc.go:56] duration metric: took 14.527264ms WaitForService to wait for kubelet
	I1227 20:27:59.146264  316262 kubeadm.go:587] duration metric: took 13.94061076s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:27:59.146302  316262 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:27:59.150048  316262 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 20:27:59.150079  316262 node_conditions.go:123] node cpu capacity is 8
	I1227 20:27:59.150097  316262 node_conditions.go:105] duration metric: took 3.789408ms to run NodePressure ...
	I1227 20:27:59.150114  316262 start.go:242] waiting for startup goroutines ...
	I1227 20:27:59.150124  316262 start.go:247] waiting for cluster config update ...
	I1227 20:27:59.150136  316262 start.go:256] writing updated cluster config ...
	I1227 20:27:59.150440  316262 ssh_runner.go:195] Run: rm -f paused
	I1227 20:27:59.156973  316262 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:27:59.231789  316262 pod_ready.go:83] waiting for pod "coredns-7d764666f9-nvnjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:59.236379  316262 pod_ready.go:94] pod "coredns-7d764666f9-nvnjg" is "Ready"
	I1227 20:27:59.236407  316262 pod_ready.go:86] duration metric: took 4.58285ms for pod "coredns-7d764666f9-nvnjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:59.238737  316262 pod_ready.go:83] waiting for pod "etcd-embed-certs-820583" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:59.242627  316262 pod_ready.go:94] pod "etcd-embed-certs-820583" is "Ready"
	I1227 20:27:59.242651  316262 pod_ready.go:86] duration metric: took 3.887766ms for pod "etcd-embed-certs-820583" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:59.244479  316262 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-820583" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:59.248307  316262 pod_ready.go:94] pod "kube-apiserver-embed-certs-820583" is "Ready"
	I1227 20:27:59.248323  316262 pod_ready.go:86] duration metric: took 3.793119ms for pod "kube-apiserver-embed-certs-820583" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:59.250030  316262 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-820583" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:59.561183  316262 pod_ready.go:94] pod "kube-controller-manager-embed-certs-820583" is "Ready"
	I1227 20:27:59.561213  316262 pod_ready.go:86] duration metric: took 311.164481ms for pod "kube-controller-manager-embed-certs-820583" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:27:59.761209  316262 pod_ready.go:83] waiting for pod "kube-proxy-srwxn" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:00.161954  316262 pod_ready.go:94] pod "kube-proxy-srwxn" is "Ready"
	I1227 20:28:00.161982  316262 pod_ready.go:86] duration metric: took 400.748571ms for pod "kube-proxy-srwxn" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:00.362104  316262 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-820583" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:00.761580  316262 pod_ready.go:94] pod "kube-scheduler-embed-certs-820583" is "Ready"
	I1227 20:28:00.761605  316262 pod_ready.go:86] duration metric: took 399.47952ms for pod "kube-scheduler-embed-certs-820583" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:00.761616  316262 pod_ready.go:40] duration metric: took 1.604605321s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:28:00.804718  316262 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 20:28:00.806219  316262 out.go:179] * Done! kubectl is now configured to use "embed-certs-820583" cluster and "default" namespace by default
	I1227 20:27:58.643809  329454 out.go:252] * Restarting existing docker container for "no-preload-014435" ...
	I1227 20:27:58.643894  329454 cli_runner.go:164] Run: docker start no-preload-014435
	I1227 20:27:58.906753  329454 cli_runner.go:164] Run: docker container inspect no-preload-014435 --format={{.State.Status}}
	I1227 20:27:58.924785  329454 kic.go:430] container "no-preload-014435" state is running.
	I1227 20:27:58.925214  329454 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-014435
	I1227 20:27:58.943582  329454 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/config.json ...
	I1227 20:27:58.943804  329454 machine.go:94] provisionDockerMachine start ...
	I1227 20:27:58.943876  329454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:27:58.963268  329454 main.go:144] libmachine: Using SSH client type: native
	I1227 20:27:58.963489  329454 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1227 20:27:58.963502  329454 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:27:58.964148  329454 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38088->127.0.0.1:33113: read: connection reset by peer
	I1227 20:28:02.088773  329454 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-014435
	
	I1227 20:28:02.088808  329454 ubuntu.go:182] provisioning hostname "no-preload-014435"
	I1227 20:28:02.088879  329454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:28:02.106747  329454 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:02.107034  329454 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1227 20:28:02.107051  329454 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-014435 && echo "no-preload-014435" | sudo tee /etc/hostname
	I1227 20:28:02.237979  329454 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-014435
	
	I1227 20:28:02.238075  329454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:28:02.256942  329454 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:02.257149  329454 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1227 20:28:02.257166  329454 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-014435' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-014435/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-014435' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:28:02.379610  329454 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:28:02.379637  329454 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-10897/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-10897/.minikube}
	I1227 20:28:02.379659  329454 ubuntu.go:190] setting up certificates
	I1227 20:28:02.379675  329454 provision.go:84] configureAuth start
	I1227 20:28:02.379723  329454 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-014435
	I1227 20:28:02.399177  329454 provision.go:143] copyHostCerts
	I1227 20:28:02.399251  329454 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem, removing ...
	I1227 20:28:02.399269  329454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem
	I1227 20:28:02.399362  329454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem (1675 bytes)
	I1227 20:28:02.399491  329454 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem, removing ...
	I1227 20:28:02.399504  329454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem
	I1227 20:28:02.399543  329454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem (1078 bytes)
	I1227 20:28:02.399608  329454 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem, removing ...
	I1227 20:28:02.399615  329454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem
	I1227 20:28:02.399652  329454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem (1123 bytes)
	I1227 20:28:02.399720  329454 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem org=jenkins.no-preload-014435 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-014435]
	I1227 20:28:02.506982  329454 provision.go:177] copyRemoteCerts
	I1227 20:28:02.507060  329454 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:28:02.507106  329454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:28:02.526229  329454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/no-preload-014435/id_rsa Username:docker}
	I1227 20:28:02.621315  329454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:28:02.639853  329454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:28:02.657543  329454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 20:28:02.674318  329454 provision.go:87] duration metric: took 294.620848ms to configureAuth
	I1227 20:28:02.674339  329454 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:28:02.674495  329454 config.go:182] Loaded profile config "no-preload-014435": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:02.674589  329454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:28:02.693247  329454 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:02.693478  329454 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1227 20:28:02.693496  329454 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:28:03.032856  329454 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:28:03.032880  329454 machine.go:97] duration metric: took 4.089060818s to provisionDockerMachine
	I1227 20:28:03.032895  329454 start.go:293] postStartSetup for "no-preload-014435" (driver="docker")
	I1227 20:28:03.032907  329454 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:28:03.033019  329454 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:28:03.033072  329454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:28:03.055364  329454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/no-preload-014435/id_rsa Username:docker}
	I1227 20:28:03.147720  329454 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:28:03.151360  329454 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:28:03.151389  329454 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:28:03.151398  329454 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/addons for local assets ...
	I1227 20:28:03.151445  329454 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/files for local assets ...
	I1227 20:28:03.151550  329454 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem -> 144272.pem in /etc/ssl/certs
	I1227 20:28:03.151673  329454 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:28:03.158902  329454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:28:03.175420  329454 start.go:296] duration metric: took 142.512685ms for postStartSetup
	I1227 20:28:03.175476  329454 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:28:03.175508  329454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:28:03.193780  329454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/no-preload-014435/id_rsa Username:docker}
	I1227 20:28:03.281945  329454 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:28:03.286276  329454 fix.go:56] duration metric: took 4.663623643s for fixHost
	I1227 20:28:03.286311  329454 start.go:83] releasing machines lock for "no-preload-014435", held for 4.663679303s
	I1227 20:28:03.286375  329454 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-014435
	I1227 20:28:03.305955  329454 ssh_runner.go:195] Run: cat /version.json
	I1227 20:28:03.305981  329454 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:28:03.306009  329454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:28:03.306056  329454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:28:03.324166  329454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/no-preload-014435/id_rsa Username:docker}
	I1227 20:28:03.324721  329454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/no-preload-014435/id_rsa Username:docker}
	I1227 20:28:00.479146  323885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:28:00.978036  323885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:28:01.479120  323885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:28:01.978076  323885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:28:02.478123  323885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:28:02.979106  323885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:28:03.478040  323885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:28:03.978717  323885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:28:04.073622  323885 kubeadm.go:1114] duration metric: took 4.674196665s to wait for elevateKubeSystemPrivileges
	I1227 20:28:04.073654  323885 kubeadm.go:403] duration metric: took 11.415089879s to StartCluster
	I1227 20:28:04.073675  323885 settings.go:142] acquiring lock: {Name:mkf3077b12a3cef0d75d0d0642c2652aa74718f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:04.073735  323885 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:28:04.077127  323885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/kubeconfig: {Name:mk4ccafb996928672a8e78a62ce47d8add645009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:04.077491  323885 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 20:28:04.077907  323885 config.go:182] Loaded profile config "default-k8s-diff-port-954154": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:04.078568  323885 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:28:04.078736  323885 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:28:04.078825  323885 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-954154"
	I1227 20:28:04.078842  323885 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-954154"
	I1227 20:28:04.078862  323885 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-954154"
	I1227 20:28:04.078892  323885 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-954154"
	I1227 20:28:04.079308  323885 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-954154 --format={{.State.Status}}
	I1227 20:28:04.078870  323885 host.go:66] Checking if "default-k8s-diff-port-954154" exists ...
	I1227 20:28:04.079932  323885 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-954154 --format={{.State.Status}}
	I1227 20:28:04.080060  323885 out.go:179] * Verifying Kubernetes components...
	I1227 20:28:04.082150  323885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:28:04.113712  323885 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:28:03.413293  329454 ssh_runner.go:195] Run: systemctl --version
	I1227 20:28:03.477068  329454 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:28:03.515886  329454 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:28:03.521141  329454 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:28:03.521191  329454 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:28:03.529551  329454 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:28:03.529575  329454 start.go:496] detecting cgroup driver to use...
	I1227 20:28:03.529607  329454 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 20:28:03.529655  329454 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:28:03.546287  329454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:28:03.558529  329454 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:28:03.558578  329454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:28:03.572326  329454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:28:03.584386  329454 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:28:03.670995  329454 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:28:03.769358  329454 docker.go:234] disabling docker service ...
	I1227 20:28:03.769412  329454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:28:03.784321  329454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:28:03.797777  329454 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:28:03.891171  329454 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:28:04.009325  329454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:28:04.028089  329454 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:28:04.047996  329454 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:28:04.048059  329454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:04.060036  329454 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 20:28:04.060150  329454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:04.071235  329454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:04.087432  329454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:04.107201  329454 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:28:04.122524  329454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:04.143217  329454 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:04.158947  329454 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:04.177157  329454 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:28:04.191376  329454 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:28:04.208640  329454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:28:04.356227  329454 ssh_runner.go:195] Run: sudo systemctl restart crio
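Note: the sed edits above (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) all target the /etc/crio/crio.conf.d/02-crio.conf drop-in before crio is restarted. A minimal sketch of verifying the result on the node, using only paths and values taken from the commands above:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the sed commands above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
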
	I1227 20:28:04.573538  329454 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:28:04.573633  329454 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:28:04.580310  329454 start.go:574] Will wait 60s for crictl version
	I1227 20:28:04.580505  329454 ssh_runner.go:195] Run: which crictl
	I1227 20:28:04.585791  329454 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:28:04.625304  329454 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:28:04.625638  329454 ssh_runner.go:195] Run: crio --version
	I1227 20:28:04.670742  329454 ssh_runner.go:195] Run: crio --version
	I1227 20:28:04.715438  329454 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:28:04.113774  323885 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-954154"
	I1227 20:28:04.113819  323885 host.go:66] Checking if "default-k8s-diff-port-954154" exists ...
	I1227 20:28:04.114317  323885 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-954154 --format={{.State.Status}}
	I1227 20:28:04.114940  323885 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:28:04.114960  323885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:28:04.115014  323885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:04.145561  323885 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:28:04.145559  323885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/default-k8s-diff-port-954154/id_rsa Username:docker}
	I1227 20:28:04.145583  323885 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:28:04.145640  323885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:04.180539  323885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/default-k8s-diff-port-954154/id_rsa Username:docker}
	I1227 20:28:04.224052  323885 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 20:28:04.293851  323885 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:28:04.298619  323885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:28:04.331712  323885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:28:04.490813  323885 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1227 20:28:04.492602  323885 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-954154" to be "Ready" ...
	I1227 20:28:04.770837  323885 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1227 20:28:00.507726  323695 pod_ready.go:104] pod "coredns-5dd5756b68-lklgt" is not "Ready", error: <nil>
	W1227 20:28:02.508399  323695 pod_ready.go:104] pod "coredns-5dd5756b68-lklgt" is not "Ready", error: <nil>
	W1227 20:28:04.514656  323695 pod_ready.go:104] pod "coredns-5dd5756b68-lklgt" is not "Ready", error: <nil>
	I1227 20:28:04.773145  323885 addons.go:530] duration metric: took 694.410939ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1227 20:28:04.998166  323885 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-954154" context rescaled to 1 replicas
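The two coredns steps above rewrite the coredns ConfigMap to add a host.minikube.internal hosts block and rescale the coredns deployment to 1 replica. A quick check, sketched under the assumption that kubectl is pointed at the default-k8s-diff-port-954154 context:

    kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
    kubectl -n kube-system get deployment coredns -o jsonpath='{.spec.replicas}{"\n"}'
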
	I1227 20:28:04.717114  329454 cli_runner.go:164] Run: docker network inspect no-preload-014435 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:28:04.742364  329454 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1227 20:28:04.747450  329454 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:28:04.761467  329454 kubeadm.go:884] updating cluster {Name:no-preload-014435 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-014435 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:28:04.761649  329454 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:28:04.761703  329454 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:28:04.801231  329454 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:28:04.801257  329454 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:28:04.801266  329454 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 crio true true} ...
	I1227 20:28:04.801395  329454 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-014435 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-014435 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:28:04.801483  329454 ssh_runner.go:195] Run: crio config
	I1227 20:28:04.870893  329454 cni.go:84] Creating CNI manager for ""
	I1227 20:28:04.870930  329454 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:28:04.870948  329454 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:28:04.870979  329454 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-014435 NodeName:no-preload-014435 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:28:04.871151  329454 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-014435"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:28:04.871225  329454 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:28:04.882031  329454 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:28:04.882093  329454 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:28:04.891866  329454 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1227 20:28:04.907282  329454 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:28:04.922653  329454 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
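The kubeadm config rendered above has just been written to /var/tmp/minikube/kubeadm.yaml.new and is later diffed against the copy already on the node (the `diff -u` run at 20:28:06). A sketch of inspecting it by hand; the second line assumes the bundled kubeadm binary supports the `config validate` subcommand:

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
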
	I1227 20:28:04.939231  329454 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:28:04.943933  329454 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:28:04.956280  329454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:28:05.079343  329454 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:28:05.113527  329454 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435 for IP: 192.168.94.2
	I1227 20:28:05.113551  329454 certs.go:195] generating shared ca certs ...
	I1227 20:28:05.113574  329454 certs.go:227] acquiring lock for ca certs: {Name:mkf159d13acb73e6d0ac046e4aaef1dce56a185d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:05.113745  329454 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key
	I1227 20:28:05.113813  329454 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key
	I1227 20:28:05.113826  329454 certs.go:257] generating profile certs ...
	I1227 20:28:05.113978  329454 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/client.key
	I1227 20:28:05.114070  329454 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/apiserver.key.00c17d97
	I1227 20:28:05.114126  329454 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/proxy-client.key
	I1227 20:28:05.114270  329454 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem (1338 bytes)
	W1227 20:28:05.114339  329454 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427_empty.pem, impossibly tiny 0 bytes
	I1227 20:28:05.114350  329454 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 20:28:05.114381  329454 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:28:05.114409  329454 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:28:05.114437  329454 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem (1675 bytes)
	I1227 20:28:05.114503  329454 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:28:05.115253  329454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:28:05.141671  329454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:28:05.167267  329454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:28:05.191382  329454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 20:28:05.221277  329454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 20:28:05.247367  329454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 20:28:05.268782  329454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:28:05.289803  329454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/no-preload-014435/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:28:05.314755  329454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /usr/share/ca-certificates/144272.pem (1708 bytes)
	I1227 20:28:05.337703  329454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:28:05.361403  329454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem --> /usr/share/ca-certificates/14427.pem (1338 bytes)
	I1227 20:28:05.383777  329454 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:28:05.399582  329454 ssh_runner.go:195] Run: openssl version
	I1227 20:28:05.407644  329454 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/144272.pem
	I1227 20:28:05.417281  329454 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/144272.pem /etc/ssl/certs/144272.pem
	I1227 20:28:05.426700  329454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144272.pem
	I1227 20:28:05.431087  329454 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 19:58 /usr/share/ca-certificates/144272.pem
	I1227 20:28:05.431148  329454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144272.pem
	I1227 20:28:05.492545  329454 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:28:05.503787  329454 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:05.514315  329454 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:28:05.525106  329454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:05.530164  329454 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:05.530223  329454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:05.590565  329454 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:28:05.601554  329454 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14427.pem
	I1227 20:28:05.613568  329454 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14427.pem /etc/ssl/certs/14427.pem
	I1227 20:28:05.624387  329454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14427.pem
	I1227 20:28:05.630188  329454 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 19:58 /usr/share/ca-certificates/14427.pem
	I1227 20:28:05.630264  329454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14427.pem
	I1227 20:28:05.690345  329454 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
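The openssl/ln sequence above follows the standard OpenSSL CA-hash layout: each certificate placed under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after the subject hash printed by `openssl x509 -hash -noout` (the 3ec20f2e.0, b5213941.0 and 51391683.0 links being tested). A sketch of reproducing one link by hand from the commands shown:

    # derive the hash name and create/check the link for one cert
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    sudo test -L "/etc/ssl/certs/${h}.0" && echo "minikubeCA.pem linked as ${h}.0"
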
	I1227 20:28:05.701622  329454 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:28:05.706974  329454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:28:05.770449  329454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:28:05.835600  329454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:28:05.897669  329454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:28:05.959856  329454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:28:06.021309  329454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
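The `-checkend 86400` probes above ask openssl whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit marks the certificate as expiring. For example:

    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" \
      || echo "expires within 24h (or already expired)"
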
	I1227 20:28:06.082496  329454 kubeadm.go:401] StartCluster: {Name:no-preload-014435 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-014435 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:28:06.082619  329454 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:28:06.082776  329454 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:28:06.130036  329454 cri.go:96] found id: "ccbaca54231342c2e05e7f6f39601d2b1cde71a9e688363aa99cd9f16b7b740b"
	I1227 20:28:06.130068  329454 cri.go:96] found id: "7e08e2ae41d9f45611fbcd90fd0c42176961f4b3b41557748ee9cae592b1a0c6"
	I1227 20:28:06.130074  329454 cri.go:96] found id: "a18c71b2140b36f980a1c87567071c8701b3e5f8aa4ab2f6fb52beb4656e9bd2"
	I1227 20:28:06.130079  329454 cri.go:96] found id: "455bcec8a175a162715bf84ffdfb0f031b741ff15312c1ed41956c3dafb97b6e"
	I1227 20:28:06.130083  329454 cri.go:96] found id: ""
	I1227 20:28:06.130126  329454 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 20:28:06.146202  329454 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:28:06Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:28:06.146265  329454 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:28:06.156422  329454 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:28:06.156524  329454 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:28:06.156592  329454 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:28:06.166650  329454 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:28:06.167995  329454 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-014435" does not appear in /home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:28:06.169027  329454 kubeconfig.go:62] /home/jenkins/minikube-integration/22332-10897/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-014435" cluster setting kubeconfig missing "no-preload-014435" context setting]
	I1227 20:28:06.170510  329454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/kubeconfig: {Name:mk4ccafb996928672a8e78a62ce47d8add645009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:06.172930  329454 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:28:06.185259  329454 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1227 20:28:06.185294  329454 kubeadm.go:602] duration metric: took 28.756088ms to restartPrimaryControlPlane
	I1227 20:28:06.185306  329454 kubeadm.go:403] duration metric: took 102.822543ms to StartCluster
	I1227 20:28:06.185322  329454 settings.go:142] acquiring lock: {Name:mkf3077b12a3cef0d75d0d0642c2652aa74718f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:06.185380  329454 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:28:06.187760  329454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/kubeconfig: {Name:mk4ccafb996928672a8e78a62ce47d8add645009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:06.188033  329454 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:28:06.188181  329454 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:28:06.188287  329454 addons.go:70] Setting storage-provisioner=true in profile "no-preload-014435"
	I1227 20:28:06.188319  329454 addons.go:239] Setting addon storage-provisioner=true in "no-preload-014435"
	W1227 20:28:06.188332  329454 addons.go:248] addon storage-provisioner should already be in state true
	I1227 20:28:06.188367  329454 config.go:182] Loaded profile config "no-preload-014435": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:06.188421  329454 addons.go:70] Setting dashboard=true in profile "no-preload-014435"
	I1227 20:28:06.188442  329454 addons.go:239] Setting addon dashboard=true in "no-preload-014435"
	W1227 20:28:06.188450  329454 addons.go:248] addon dashboard should already be in state true
	I1227 20:28:06.188463  329454 host.go:66] Checking if "no-preload-014435" exists ...
	I1227 20:28:06.188482  329454 host.go:66] Checking if "no-preload-014435" exists ...
	I1227 20:28:06.188641  329454 addons.go:70] Setting default-storageclass=true in profile "no-preload-014435"
	I1227 20:28:06.188670  329454 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-014435"
	I1227 20:28:06.189012  329454 cli_runner.go:164] Run: docker container inspect no-preload-014435 --format={{.State.Status}}
	I1227 20:28:06.189035  329454 cli_runner.go:164] Run: docker container inspect no-preload-014435 --format={{.State.Status}}
	I1227 20:28:06.189125  329454 cli_runner.go:164] Run: docker container inspect no-preload-014435 --format={{.State.Status}}
	I1227 20:28:06.193223  329454 out.go:179] * Verifying Kubernetes components...
	I1227 20:28:06.194649  329454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:28:06.220577  329454 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 20:28:06.220577  329454 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:28:06.221053  329454 addons.go:239] Setting addon default-storageclass=true in "no-preload-014435"
	W1227 20:28:06.221074  329454 addons.go:248] addon default-storageclass should already be in state true
	I1227 20:28:06.221103  329454 host.go:66] Checking if "no-preload-014435" exists ...
	I1227 20:28:06.221581  329454 cli_runner.go:164] Run: docker container inspect no-preload-014435 --format={{.State.Status}}
	I1227 20:28:06.221887  329454 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:28:06.221905  329454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:28:06.221989  329454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:28:06.225563  329454 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 20:28:06.229144  329454 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 20:28:06.229166  329454 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 20:28:06.229237  329454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:28:06.253516  329454 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:28:06.253555  329454 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:28:06.253616  329454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:28:06.253829  329454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/no-preload-014435/id_rsa Username:docker}
	I1227 20:28:06.266670  329454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/no-preload-014435/id_rsa Username:docker}
	I1227 20:28:06.289141  329454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/no-preload-014435/id_rsa Username:docker}
	I1227 20:28:06.377764  329454 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:28:06.378960  329454 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:28:06.386166  329454 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 20:28:06.386189  329454 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 20:28:06.395638  329454 node_ready.go:35] waiting up to 6m0s for node "no-preload-014435" to be "Ready" ...
	I1227 20:28:06.401524  329454 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:28:06.404844  329454 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 20:28:06.404865  329454 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 20:28:06.422207  329454 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 20:28:06.422229  329454 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 20:28:06.441471  329454 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 20:28:06.441495  329454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 20:28:06.462189  329454 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 20:28:06.462214  329454 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 20:28:06.476955  329454 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 20:28:06.476980  329454 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 20:28:06.490295  329454 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 20:28:06.490321  329454 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 20:28:06.504701  329454 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 20:28:06.504735  329454 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 20:28:06.519137  329454 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:28:06.519161  329454 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 20:28:06.532459  329454 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:28:07.790104  329454 node_ready.go:49] node "no-preload-014435" is "Ready"
	I1227 20:28:07.790143  329454 node_ready.go:38] duration metric: took 1.394439888s for node "no-preload-014435" to be "Ready" ...
	I1227 20:28:07.790163  329454 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:28:07.790220  329454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:28:08.456038  329454 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.077018471s)
	I1227 20:28:08.456108  329454 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.054533452s)
	I1227 20:28:08.456266  329454 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.923758762s)
	I1227 20:28:08.456300  329454 api_server.go:72] duration metric: took 2.268211493s to wait for apiserver process to appear ...
	I1227 20:28:08.456315  329454 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:28:08.456359  329454 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1227 20:28:08.460388  329454 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-014435 addons enable metrics-server
	
	I1227 20:28:08.463375  329454 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:28:08.463402  329454 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:28:08.468895  329454 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1227 20:28:07.013381  323695 pod_ready.go:104] pod "coredns-5dd5756b68-lklgt" is not "Ready", error: <nil>
	W1227 20:28:09.509864  323695 pod_ready.go:104] pod "coredns-5dd5756b68-lklgt" is not "Ready", error: <nil>
	W1227 20:28:06.496819  323885 node_ready.go:57] node "default-k8s-diff-port-954154" has "Ready":"False" status (will retry)
	W1227 20:28:08.996757  323885 node_ready.go:57] node "default-k8s-diff-port-954154" has "Ready":"False" status (will retry)
	I1227 20:28:08.470038  329454 addons.go:530] duration metric: took 2.281869861s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1227 20:28:08.957119  329454 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1227 20:28:08.962984  329454 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:28:08.963018  329454 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:28:09.456435  329454 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1227 20:28:09.462323  329454 api_server.go:325] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1227 20:28:09.463432  329454 api_server.go:141] control plane version: v1.35.0
	I1227 20:28:09.463462  329454 api_server.go:131] duration metric: took 1.007140471s to wait for apiserver health ...
	I1227 20:28:09.463473  329454 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:28:09.467153  329454 system_pods.go:59] 8 kube-system pods found
	I1227 20:28:09.467189  329454 system_pods.go:61] "coredns-7d764666f9-nvrq6" [ca55daec-8a25-48b5-ace0-eeb5441b6174] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:28:09.467200  329454 system_pods.go:61] "etcd-no-preload-014435" [efdf1420-2f4c-4882-8706-04e34727a803] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:28:09.467208  329454 system_pods.go:61] "kindnet-7pgwz" [a1f9fadf-b5dd-472d-bffe-f8a555aa44c9] Running
	I1227 20:28:09.467218  329454 system_pods.go:61] "kube-apiserver-no-preload-014435" [6a20b877-1c1b-4f46-b4dc-32a6ed3a3b68] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:28:09.467230  329454 system_pods.go:61] "kube-controller-manager-no-preload-014435" [e9d451ba-0129-47a3-b703-df1bfd83631d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:28:09.467245  329454 system_pods.go:61] "kube-proxy-ctvzq" [8db29263-ce40-4df9-9316-781104ff2dd5] Running
	I1227 20:28:09.467253  329454 system_pods.go:61] "kube-scheduler-no-preload-014435" [7a4391d1-1151-47d8-9f3e-3be45844ec85] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:28:09.467261  329454 system_pods.go:61] "storage-provisioner" [dcd68309-2ed4-4177-b826-fe8649b75bbd] Running
	I1227 20:28:09.467275  329454 system_pods.go:74] duration metric: took 3.788381ms to wait for pod list to return data ...
	I1227 20:28:09.467288  329454 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:28:09.470018  329454 default_sa.go:45] found service account: "default"
	I1227 20:28:09.470048  329454 default_sa.go:55] duration metric: took 2.753108ms for default service account to be created ...
	I1227 20:28:09.470058  329454 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:28:09.473102  329454 system_pods.go:86] 8 kube-system pods found
	I1227 20:28:09.473132  329454 system_pods.go:89] "coredns-7d764666f9-nvrq6" [ca55daec-8a25-48b5-ace0-eeb5441b6174] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:28:09.473141  329454 system_pods.go:89] "etcd-no-preload-014435" [efdf1420-2f4c-4882-8706-04e34727a803] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:28:09.473148  329454 system_pods.go:89] "kindnet-7pgwz" [a1f9fadf-b5dd-472d-bffe-f8a555aa44c9] Running
	I1227 20:28:09.473157  329454 system_pods.go:89] "kube-apiserver-no-preload-014435" [6a20b877-1c1b-4f46-b4dc-32a6ed3a3b68] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:28:09.473175  329454 system_pods.go:89] "kube-controller-manager-no-preload-014435" [e9d451ba-0129-47a3-b703-df1bfd83631d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:28:09.473181  329454 system_pods.go:89] "kube-proxy-ctvzq" [8db29263-ce40-4df9-9316-781104ff2dd5] Running
	I1227 20:28:09.473189  329454 system_pods.go:89] "kube-scheduler-no-preload-014435" [7a4391d1-1151-47d8-9f3e-3be45844ec85] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:28:09.473194  329454 system_pods.go:89] "storage-provisioner" [dcd68309-2ed4-4177-b826-fe8649b75bbd] Running
	I1227 20:28:09.473202  329454 system_pods.go:126] duration metric: took 3.137569ms to wait for k8s-apps to be running ...
	I1227 20:28:09.473210  329454 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:28:09.473249  329454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:28:09.490134  329454 system_svc.go:56] duration metric: took 16.91621ms WaitForService to wait for kubelet
	I1227 20:28:09.490339  329454 kubeadm.go:587] duration metric: took 3.302269672s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:28:09.490398  329454 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:28:09.493778  329454 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 20:28:09.493814  329454 node_conditions.go:123] node cpu capacity is 8
	I1227 20:28:09.493832  329454 node_conditions.go:105] duration metric: took 3.406809ms to run NodePressure ...
	I1227 20:28:09.493847  329454 start.go:242] waiting for startup goroutines ...
	I1227 20:28:09.493862  329454 start.go:247] waiting for cluster config update ...
	I1227 20:28:09.493879  329454 start.go:256] writing updated cluster config ...
	I1227 20:28:09.494198  329454 ssh_runner.go:195] Run: rm -f paused
	I1227 20:28:09.499315  329454 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:28:09.503872  329454 pod_ready.go:83] waiting for pod "coredns-7d764666f9-nvrq6" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 20:28:11.509062  329454 pod_ready.go:104] pod "coredns-7d764666f9-nvrq6" is not "Ready", error: <nil>
	W1227 20:28:12.008192  323695 pod_ready.go:104] pod "coredns-5dd5756b68-lklgt" is not "Ready", error: <nil>
	W1227 20:28:14.507318  323695 pod_ready.go:104] pod "coredns-5dd5756b68-lklgt" is not "Ready", error: <nil>
	W1227 20:28:11.495208  323885 node_ready.go:57] node "default-k8s-diff-port-954154" has "Ready":"False" status (will retry)
	W1227 20:28:13.496471  323885 node_ready.go:57] node "default-k8s-diff-port-954154" has "Ready":"False" status (will retry)
	W1227 20:28:13.510690  329454 pod_ready.go:104] pod "coredns-7d764666f9-nvrq6" is not "Ready", error: <nil>
	W1227 20:28:16.009347  329454 pod_ready.go:104] pod "coredns-7d764666f9-nvrq6" is not "Ready", error: <nil>
	W1227 20:28:16.509213  323695 pod_ready.go:104] pod "coredns-5dd5756b68-lklgt" is not "Ready", error: <nil>
	W1227 20:28:18.509436  323695 pod_ready.go:104] pod "coredns-5dd5756b68-lklgt" is not "Ready", error: <nil>
	W1227 20:28:15.995879  323885 node_ready.go:57] node "default-k8s-diff-port-954154" has "Ready":"False" status (will retry)
	I1227 20:28:17.995866  323885 node_ready.go:49] node "default-k8s-diff-port-954154" is "Ready"
	I1227 20:28:17.995934  323885 node_ready.go:38] duration metric: took 13.503275534s for node "default-k8s-diff-port-954154" to be "Ready" ...
	I1227 20:28:17.995953  323885 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:28:17.996010  323885 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:28:18.012263  323885 api_server.go:72] duration metric: took 13.933656017s to wait for apiserver process to appear ...
	I1227 20:28:18.012292  323885 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:28:18.012321  323885 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1227 20:28:18.017514  323885 api_server.go:325] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1227 20:28:18.018684  323885 api_server.go:141] control plane version: v1.35.0
	I1227 20:28:18.018705  323885 api_server.go:131] duration metric: took 6.406785ms to wait for apiserver health ...
	I1227 20:28:18.018713  323885 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:28:18.022395  323885 system_pods.go:59] 8 kube-system pods found
	I1227 20:28:18.022437  323885 system_pods.go:61] "coredns-7d764666f9-gtzdb" [94553f69-88cf-4e2c-94e4-99d2034bcc9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:28:18.022444  323885 system_pods.go:61] "etcd-default-k8s-diff-port-954154" [4eafa22e-2c0b-4d78-90f2-2becf0d0e321] Running
	I1227 20:28:18.022458  323885 system_pods.go:61] "kindnet-c9zm9" [3b0b3ae7-d30d-4a2d-bbff-21dba59ffb5a] Running
	I1227 20:28:18.022465  323885 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-954154" [f52b07b4-22fb-4e93-bfa4-f86ed39beed6] Running
	I1227 20:28:18.022470  323885 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-954154" [b43b5648-7206-444d-8fdd-7504b26bf16c] Running
	I1227 20:28:18.022475  323885 system_pods.go:61] "kube-proxy-m5zcc" [2ca10db8-75c1-459b-b3ec-bdb128f9d72a] Running
	I1227 20:28:18.022483  323885 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-954154" [b39ea939-f629-4d8d-9874-3234839d3abd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:28:18.022490  323885 system_pods.go:61] "storage-provisioner" [e47d55de-82b6-47f6-b639-1c28182777af] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:28:18.022497  323885 system_pods.go:74] duration metric: took 3.778842ms to wait for pod list to return data ...
	I1227 20:28:18.022506  323885 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:28:18.024507  323885 default_sa.go:45] found service account: "default"
	I1227 20:28:18.024525  323885 default_sa.go:55] duration metric: took 2.01115ms for default service account to be created ...
	I1227 20:28:18.024538  323885 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:28:18.026985  323885 system_pods.go:86] 8 kube-system pods found
	I1227 20:28:18.027008  323885 system_pods.go:89] "coredns-7d764666f9-gtzdb" [94553f69-88cf-4e2c-94e4-99d2034bcc9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:28:18.027013  323885 system_pods.go:89] "etcd-default-k8s-diff-port-954154" [4eafa22e-2c0b-4d78-90f2-2becf0d0e321] Running
	I1227 20:28:18.027019  323885 system_pods.go:89] "kindnet-c9zm9" [3b0b3ae7-d30d-4a2d-bbff-21dba59ffb5a] Running
	I1227 20:28:18.027023  323885 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-954154" [f52b07b4-22fb-4e93-bfa4-f86ed39beed6] Running
	I1227 20:28:18.027027  323885 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-954154" [b43b5648-7206-444d-8fdd-7504b26bf16c] Running
	I1227 20:28:18.027031  323885 system_pods.go:89] "kube-proxy-m5zcc" [2ca10db8-75c1-459b-b3ec-bdb128f9d72a] Running
	I1227 20:28:18.027038  323885 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-954154" [b39ea939-f629-4d8d-9874-3234839d3abd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:28:18.027046  323885 system_pods.go:89] "storage-provisioner" [e47d55de-82b6-47f6-b639-1c28182777af] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:28:18.027067  323885 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1227 20:28:18.322803  323885 system_pods.go:86] 8 kube-system pods found
	I1227 20:28:18.322864  323885 system_pods.go:89] "coredns-7d764666f9-gtzdb" [94553f69-88cf-4e2c-94e4-99d2034bcc9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:28:18.322873  323885 system_pods.go:89] "etcd-default-k8s-diff-port-954154" [4eafa22e-2c0b-4d78-90f2-2becf0d0e321] Running
	I1227 20:28:18.322879  323885 system_pods.go:89] "kindnet-c9zm9" [3b0b3ae7-d30d-4a2d-bbff-21dba59ffb5a] Running
	I1227 20:28:18.322883  323885 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-954154" [f52b07b4-22fb-4e93-bfa4-f86ed39beed6] Running
	I1227 20:28:18.322893  323885 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-954154" [b43b5648-7206-444d-8fdd-7504b26bf16c] Running
	I1227 20:28:18.322898  323885 system_pods.go:89] "kube-proxy-m5zcc" [2ca10db8-75c1-459b-b3ec-bdb128f9d72a] Running
	I1227 20:28:18.322904  323885 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-954154" [b39ea939-f629-4d8d-9874-3234839d3abd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:28:18.322937  323885 system_pods.go:89] "storage-provisioner" [e47d55de-82b6-47f6-b639-1c28182777af] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:28:18.692672  323885 system_pods.go:86] 8 kube-system pods found
	I1227 20:28:18.692701  323885 system_pods.go:89] "coredns-7d764666f9-gtzdb" [94553f69-88cf-4e2c-94e4-99d2034bcc9a] Running
	I1227 20:28:18.692707  323885 system_pods.go:89] "etcd-default-k8s-diff-port-954154" [4eafa22e-2c0b-4d78-90f2-2becf0d0e321] Running
	I1227 20:28:18.692711  323885 system_pods.go:89] "kindnet-c9zm9" [3b0b3ae7-d30d-4a2d-bbff-21dba59ffb5a] Running
	I1227 20:28:18.692715  323885 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-954154" [f52b07b4-22fb-4e93-bfa4-f86ed39beed6] Running
	I1227 20:28:18.692718  323885 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-954154" [b43b5648-7206-444d-8fdd-7504b26bf16c] Running
	I1227 20:28:18.692722  323885 system_pods.go:89] "kube-proxy-m5zcc" [2ca10db8-75c1-459b-b3ec-bdb128f9d72a] Running
	I1227 20:28:18.692727  323885 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-954154" [b39ea939-f629-4d8d-9874-3234839d3abd] Running
	I1227 20:28:18.692733  323885 system_pods.go:89] "storage-provisioner" [e47d55de-82b6-47f6-b639-1c28182777af] Running
	I1227 20:28:18.692742  323885 system_pods.go:126] duration metric: took 668.195808ms to wait for k8s-apps to be running ...
	I1227 20:28:18.692756  323885 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:28:18.692800  323885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:28:18.705541  323885 system_svc.go:56] duration metric: took 12.775966ms WaitForService to wait for kubelet
	I1227 20:28:18.705565  323885 kubeadm.go:587] duration metric: took 14.626964207s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:28:18.705581  323885 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:28:18.708246  323885 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 20:28:18.708268  323885 node_conditions.go:123] node cpu capacity is 8
	I1227 20:28:18.708281  323885 node_conditions.go:105] duration metric: took 2.695403ms to run NodePressure ...
	I1227 20:28:18.708291  323885 start.go:242] waiting for startup goroutines ...
	I1227 20:28:18.708304  323885 start.go:247] waiting for cluster config update ...
	I1227 20:28:18.708313  323885 start.go:256] writing updated cluster config ...
	I1227 20:28:18.708540  323885 ssh_runner.go:195] Run: rm -f paused
	I1227 20:28:18.712054  323885 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:28:18.715108  323885 pod_ready.go:83] waiting for pod "coredns-7d764666f9-gtzdb" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:18.718830  323885 pod_ready.go:94] pod "coredns-7d764666f9-gtzdb" is "Ready"
	I1227 20:28:18.718847  323885 pod_ready.go:86] duration metric: took 3.721075ms for pod "coredns-7d764666f9-gtzdb" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:18.720658  323885 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-954154" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:18.724071  323885 pod_ready.go:94] pod "etcd-default-k8s-diff-port-954154" is "Ready"
	I1227 20:28:18.724095  323885 pod_ready.go:86] duration metric: took 3.416367ms for pod "etcd-default-k8s-diff-port-954154" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:18.725774  323885 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-954154" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:18.728881  323885 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-954154" is "Ready"
	I1227 20:28:18.728897  323885 pod_ready.go:86] duration metric: took 3.1064ms for pod "kube-apiserver-default-k8s-diff-port-954154" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:18.730641  323885 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-954154" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:19.116503  323885 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-954154" is "Ready"
	I1227 20:28:19.116535  323885 pod_ready.go:86] duration metric: took 385.870583ms for pod "kube-controller-manager-default-k8s-diff-port-954154" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:19.317325  323885 pod_ready.go:83] waiting for pod "kube-proxy-m5zcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:19.716069  323885 pod_ready.go:94] pod "kube-proxy-m5zcc" is "Ready"
	I1227 20:28:19.716096  323885 pod_ready.go:86] duration metric: took 398.740828ms for pod "kube-proxy-m5zcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:19.915767  323885 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-954154" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:20.315852  323885 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-954154" is "Ready"
	I1227 20:28:20.315873  323885 pod_ready.go:86] duration metric: took 400.080038ms for pod "kube-scheduler-default-k8s-diff-port-954154" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:20.315884  323885 pod_ready.go:40] duration metric: took 1.603802515s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:28:20.359460  323885 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 20:28:20.361010  323885 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-954154" cluster and "default" namespace by default
	W1227 20:28:18.511954  329454 pod_ready.go:104] pod "coredns-7d764666f9-nvrq6" is not "Ready", error: <nil>
	W1227 20:28:21.009404  329454 pod_ready.go:104] pod "coredns-7d764666f9-nvrq6" is not "Ready", error: <nil>
	W1227 20:28:21.007398  323695 pod_ready.go:104] pod "coredns-5dd5756b68-lklgt" is not "Ready", error: <nil>
	W1227 20:28:23.508021  323695 pod_ready.go:104] pod "coredns-5dd5756b68-lklgt" is not "Ready", error: <nil>
	I1227 20:28:25.507368  323695 pod_ready.go:94] pod "coredns-5dd5756b68-lklgt" is "Ready"
	I1227 20:28:25.507397  323695 pod_ready.go:86] duration metric: took 33.505361421s for pod "coredns-5dd5756b68-lklgt" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:25.510059  323695 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-762177" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:25.514194  323695 pod_ready.go:94] pod "etcd-old-k8s-version-762177" is "Ready"
	I1227 20:28:25.514213  323695 pod_ready.go:86] duration metric: took 4.133836ms for pod "etcd-old-k8s-version-762177" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:25.516805  323695 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-762177" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:25.520519  323695 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-762177" is "Ready"
	I1227 20:28:25.520541  323695 pod_ready.go:86] duration metric: took 3.718505ms for pod "kube-apiserver-old-k8s-version-762177" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:25.524776  323695 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-762177" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:25.705375  323695 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-762177" is "Ready"
	I1227 20:28:25.705399  323695 pod_ready.go:86] duration metric: took 180.606334ms for pod "kube-controller-manager-old-k8s-version-762177" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:25.905886  323695 pod_ready.go:83] waiting for pod "kube-proxy-99q8t" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:26.305948  323695 pod_ready.go:94] pod "kube-proxy-99q8t" is "Ready"
	I1227 20:28:26.305974  323695 pod_ready.go:86] duration metric: took 400.040426ms for pod "kube-proxy-99q8t" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:26.506571  323695 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-762177" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:26.905693  323695 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-762177" is "Ready"
	I1227 20:28:26.905717  323695 pod_ready.go:86] duration metric: took 399.118073ms for pod "kube-scheduler-old-k8s-version-762177" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:26.905728  323695 pod_ready.go:40] duration metric: took 34.908067751s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:28:26.952323  323695 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1227 20:28:26.953563  323695 out.go:203] 
	W1227 20:28:26.955346  323695 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1227 20:28:26.956529  323695 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1227 20:28:26.959343  323695 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-762177" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 27 20:28:17 default-k8s-diff-port-954154 crio[779]: time="2025-12-27T20:28:17.970448306Z" level=info msg="Starting container: 8f674117d42d3489517e33f38d26f6b2d18df2ce8a516282afc57b08d13a65ed" id=614570e5-e82e-42da-856d-f14a73b3b015 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:28:17 default-k8s-diff-port-954154 crio[779]: time="2025-12-27T20:28:17.972451789Z" level=info msg="Started container" PID=1888 containerID=8f674117d42d3489517e33f38d26f6b2d18df2ce8a516282afc57b08d13a65ed description=kube-system/coredns-7d764666f9-gtzdb/coredns id=614570e5-e82e-42da-856d-f14a73b3b015 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d3e95ed1936b69c6c0da7ffae045f347697d1d1c73955bb72f7112bd0f9ad796
	Dec 27 20:28:20 default-k8s-diff-port-954154 crio[779]: time="2025-12-27T20:28:20.85141414Z" level=info msg="Running pod sandbox: default/busybox/POD" id=ea5182ab-7840-46b8-b92e-d3acd48d1c8a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:28:20 default-k8s-diff-port-954154 crio[779]: time="2025-12-27T20:28:20.851506973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:20 default-k8s-diff-port-954154 crio[779]: time="2025-12-27T20:28:20.856151144Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:cd5b3128d4ede91d481ca5b1fe669a587d1f28cd30dbd4d6ba7547fb7a50cb9b UID:d25d862a-9040-4a22-935d-4e6d3eac79d1 NetNS:/var/run/netns/2b193672-5677-4511-9810-2de726326940 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000694370}] Aliases:map[]}"
	Dec 27 20:28:20 default-k8s-diff-port-954154 crio[779]: time="2025-12-27T20:28:20.856175872Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 27 20:28:20 default-k8s-diff-port-954154 crio[779]: time="2025-12-27T20:28:20.866052111Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:cd5b3128d4ede91d481ca5b1fe669a587d1f28cd30dbd4d6ba7547fb7a50cb9b UID:d25d862a-9040-4a22-935d-4e6d3eac79d1 NetNS:/var/run/netns/2b193672-5677-4511-9810-2de726326940 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000694370}] Aliases:map[]}"
	Dec 27 20:28:20 default-k8s-diff-port-954154 crio[779]: time="2025-12-27T20:28:20.866204259Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 27 20:28:20 default-k8s-diff-port-954154 crio[779]: time="2025-12-27T20:28:20.867579776Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 27 20:28:20 default-k8s-diff-port-954154 crio[779]: time="2025-12-27T20:28:20.868734987Z" level=info msg="Ran pod sandbox cd5b3128d4ede91d481ca5b1fe669a587d1f28cd30dbd4d6ba7547fb7a50cb9b with infra container: default/busybox/POD" id=ea5182ab-7840-46b8-b92e-d3acd48d1c8a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:28:20 default-k8s-diff-port-954154 crio[779]: time="2025-12-27T20:28:20.870074335Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0e646e15-4978-4bc2-aac9-2481e6f9b6c9 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:28:20 default-k8s-diff-port-954154 crio[779]: time="2025-12-27T20:28:20.870197558Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=0e646e15-4978-4bc2-aac9-2481e6f9b6c9 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:28:20 default-k8s-diff-port-954154 crio[779]: time="2025-12-27T20:28:20.870230761Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=0e646e15-4978-4bc2-aac9-2481e6f9b6c9 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:28:20 default-k8s-diff-port-954154 crio[779]: time="2025-12-27T20:28:20.871000088Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=40e61787-99d8-445b-bbbe-045182bde21f name=/runtime.v1.ImageService/PullImage
	Dec 27 20:28:20 default-k8s-diff-port-954154 crio[779]: time="2025-12-27T20:28:20.872622374Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 27 20:28:21 default-k8s-diff-port-954154 crio[779]: time="2025-12-27T20:28:21.452026427Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=40e61787-99d8-445b-bbbe-045182bde21f name=/runtime.v1.ImageService/PullImage
	Dec 27 20:28:21 default-k8s-diff-port-954154 crio[779]: time="2025-12-27T20:28:21.452578063Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b2d2e82c-1895-4cb8-aa4e-04529e505d9f name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:28:21 default-k8s-diff-port-954154 crio[779]: time="2025-12-27T20:28:21.453986088Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a6b3060c-5b2e-4947-ad9d-de0729e0d9a2 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:28:21 default-k8s-diff-port-954154 crio[779]: time="2025-12-27T20:28:21.457201061Z" level=info msg="Creating container: default/busybox/busybox" id=f735c084-8f68-453e-9508-d9e3a22d7a3c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:28:21 default-k8s-diff-port-954154 crio[779]: time="2025-12-27T20:28:21.457300104Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:21 default-k8s-diff-port-954154 crio[779]: time="2025-12-27T20:28:21.460976335Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:21 default-k8s-diff-port-954154 crio[779]: time="2025-12-27T20:28:21.461378234Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:21 default-k8s-diff-port-954154 crio[779]: time="2025-12-27T20:28:21.486240887Z" level=info msg="Created container 2fb54846e456434ba52e61e49ed7cc9b9816f2f123cd8303894628802edd2785: default/busybox/busybox" id=f735c084-8f68-453e-9508-d9e3a22d7a3c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:28:21 default-k8s-diff-port-954154 crio[779]: time="2025-12-27T20:28:21.486723269Z" level=info msg="Starting container: 2fb54846e456434ba52e61e49ed7cc9b9816f2f123cd8303894628802edd2785" id=95a1ccf2-47fd-4eca-87e9-93655dc48d83 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:28:21 default-k8s-diff-port-954154 crio[779]: time="2025-12-27T20:28:21.488512435Z" level=info msg="Started container" PID=1970 containerID=2fb54846e456434ba52e61e49ed7cc9b9816f2f123cd8303894628802edd2785 description=default/busybox/busybox id=95a1ccf2-47fd-4eca-87e9-93655dc48d83 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cd5b3128d4ede91d481ca5b1fe669a587d1f28cd30dbd4d6ba7547fb7a50cb9b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	2fb54846e4564       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   6 seconds ago       Running             busybox                   0                   cd5b3128d4ede       busybox                                                default
	8f674117d42d3       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      10 seconds ago      Running             coredns                   0                   d3e95ed1936b6       coredns-7d764666f9-gtzdb                               kube-system
	c08728111bdbf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 seconds ago      Running             storage-provisioner       0                   d24f6cad5731b       storage-provisioner                                    kube-system
	5eea9e99ced4e       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    21 seconds ago      Running             kindnet-cni               0                   f019853b51543       kindnet-c9zm9                                          kube-system
	486d833fc9ddc       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                      23 seconds ago      Running             kube-proxy                0                   65c80c4c223e8       kube-proxy-m5zcc                                       kube-system
	e9c2164941b06       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      33 seconds ago      Running             etcd                      0                   c112e639de5a4       etcd-default-k8s-diff-port-954154                      kube-system
	bf90a1d4350dc       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                      33 seconds ago      Running             kube-apiserver            0                   39bb2bf32fb67       kube-apiserver-default-k8s-diff-port-954154            kube-system
	671a672354dd1       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                      33 seconds ago      Running             kube-scheduler            0                   de341de14e36f       kube-scheduler-default-k8s-diff-port-954154            kube-system
	bab6fa7eab080       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                      33 seconds ago      Running             kube-controller-manager   0                   fb06faebe1649       kube-controller-manager-default-k8s-diff-port-954154   kube-system
	
	
	==> coredns [8f674117d42d3489517e33f38d26f6b2d18df2ce8a516282afc57b08d13a65ed] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:46700 - 33399 "HINFO IN 1628651669883366263.4379873537695836001. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.134816759s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-954154
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-954154
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=default-k8s-diff-port-954154
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_27_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:27:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-954154
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:28:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:28:17 +0000   Sat, 27 Dec 2025 20:27:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:28:17 +0000   Sat, 27 Dec 2025 20:27:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:28:17 +0000   Sat, 27 Dec 2025 20:27:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:28:17 +0000   Sat, 27 Dec 2025 20:28:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-954154
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                7ca85da6-448a-4be6-8ab2-a8891caf574d
	  Boot ID:                    e862e3ac-f8e7-431b-a536-9252fc29cb3f
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-7d764666f9-gtzdb                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-default-k8s-diff-port-954154                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-c9zm9                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-954154             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-954154    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-m5zcc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-954154             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  26s   node-controller  Node default-k8s-diff-port-954154 event: Registered Node default-k8s-diff-port-954154 in Controller
	
	
	==> dmesg <==
	[  +0.091803] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026212] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.286875] kauditd_printk_skb: 47 callbacks suppressed
	[Dec27 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e cb b0 88 25 d7 08 06
	[ +12.641300] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 46 2d 1c 8d 7d 08 06
	[  +0.000396] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e cb b0 88 25 d7 08 06
	[Dec27 20:26] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 11 8b 52 63 6c 08 06
	[ +22.750477] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 b6 be 1c 32 17 08 06
	[  +0.000388] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 11 8b 52 63 6c 08 06
	[ +11.650371] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae cd 49 2a 1d c0 08 06
	[Dec27 20:27] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 53 ca 84 73 c0 08 06
	[  +0.000337] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 74 f5 52 32 9a 08 06
	[ +17.275927] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 80 6a 51 25 19 08 06
	[  +0.000341] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ae cd 49 2a 1d c0 08 06
	
	
	==> etcd [e9c2164941b06f837f7f6c11ed8eb60a8ea46819001b16f24fbbec2247830a4a] <==
	{"level":"info","ts":"2025-12-27T20:27:54.989570Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T20:27:55.282894Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-27T20:27:55.282996Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-27T20:27:55.283053Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-12-27T20:27:55.283073Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:27:55.283092Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-12-27T20:27:55.283683Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-27T20:27:55.283709Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:27:55.283732Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-12-27T20:27:55.283744Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-27T20:27:55.284270Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:27:55.284737Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:default-k8s-diff-port-954154 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:27:55.284742Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:27:55.284765Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:27:55.284947Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:27:55.284986Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:27:55.285003Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T20:27:55.285071Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:27:55.285123Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:27:55.285166Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T20:27:55.285292Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T20:27:55.285952Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:27:55.286170Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:27:55.289506Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:27:55.289536Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 20:28:28 up  1:10,  0 user,  load average: 3.77, 3.24, 2.25
	Linux default-k8s-diff-port-954154 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5eea9e99ced4e1dae4fd311a10d612e68030fd1cb0e0df00152e1ad9674a238a] <==
	I1227 20:28:06.745035       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:28:06.745256       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1227 20:28:06.745403       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:28:06.745421       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:28:06.745442       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:28:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:28:06.949089       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:28:06.949137       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:28:06.949150       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:28:07.040879       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 20:28:07.440443       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:28:07.440484       1 metrics.go:72] Registering metrics
	I1227 20:28:07.440568       1 controller.go:711] "Syncing nftables rules"
	I1227 20:28:16.950041       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 20:28:16.950132       1 main.go:301] handling current node
	I1227 20:28:26.953079       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 20:28:26.953123       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bf90a1d4350dcb6859d97a06cf5df6813414abf4285274c646f279ff2a3c70f0] <==
	I1227 20:27:56.231505       1 cache.go:39] Caches are synced for autoregister controller
	E1227 20:27:56.271630       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I1227 20:27:56.320254       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 20:27:56.321401       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 20:27:56.321406       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:27:56.325252       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:27:56.421396       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:27:57.121542       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1227 20:27:57.125295       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1227 20:27:57.125312       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:27:57.548743       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:27:57.582592       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:27:57.626566       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 20:27:57.633137       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1227 20:27:57.634563       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 20:27:57.639230       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:27:58.142969       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:27:58.563307       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 20:27:58.573798       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 20:27:58.583215       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1227 20:28:03.746262       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:28:03.752188       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:28:03.946273       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:28:04.154676       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1227 20:28:26.631343       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:35876: use of closed network connection
	
	
	==> kube-controller-manager [bab6fa7eab080d1923caa3f560e737a25511f09c133582a3538b37c3adfb37fe] <==
	I1227 20:28:02.950818       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:02.951321       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:02.950835       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:02.950827       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:02.950963       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:02.951676       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:02.950978       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:02.950989       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:02.951862       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 20:28:02.951989       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="default-k8s-diff-port-954154"
	I1227 20:28:02.952081       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1227 20:28:02.950790       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:02.950968       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:02.951877       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:02.950955       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:02.952280       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:02.951885       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:02.955099       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:28:02.957831       1 range_allocator.go:433] "Set node PodCIDR" node="default-k8s-diff-port-954154" podCIDRs=["10.244.0.0/24"]
	I1227 20:28:02.959403       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:03.054930       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:03.054952       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:28:03.054959       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:28:03.055228       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:17.954790       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [486d833fc9ddcf36e40df387ed34633e5330e53776320ada48c6979398936894] <==
	I1227 20:28:04.659094       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:28:04.725573       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:28:04.826518       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:04.826567       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1227 20:28:04.826718       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:28:04.850744       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:28:04.850842       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:28:04.857737       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:28:04.858235       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:28:04.858342       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:28:04.860075       1 config.go:309] "Starting node config controller"
	I1227 20:28:04.860130       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:28:04.860395       1 config.go:200] "Starting service config controller"
	I1227 20:28:04.860417       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:28:04.860436       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:28:04.860454       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:28:04.860469       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:28:04.860483       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:28:04.960690       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 20:28:04.960690       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:28:04.960714       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:28:04.960714       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [671a672354dd169b23dee265320bf79f16a1cf2ba42a82d6544c991a83ce6bf2] <==
	E1227 20:27:56.171531       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 20:27:56.172621       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 20:27:56.172797       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 20:27:56.172954       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 20:27:56.173033       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 20:27:56.173498       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 20:27:56.173504       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 20:27:56.173559       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 20:27:56.173616       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 20:27:56.173642       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 20:27:56.173641       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 20:27:56.173721       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 20:27:56.173734       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 20:27:56.173748       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 20:27:56.996869       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 20:27:57.009190       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 20:27:57.094577       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 20:27:57.112512       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 20:27:57.147399       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 20:27:57.218431       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 20:27:57.286824       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 20:27:57.296874       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 20:27:57.355962       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 20:27:57.377349       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1227 20:27:59.867397       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:28:04 default-k8s-diff-port-954154 kubelet[1308]: I1227 20:28:04.257743    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ca10db8-75c1-459b-b3ec-bdb128f9d72a-xtables-lock\") pod \"kube-proxy-m5zcc\" (UID: \"2ca10db8-75c1-459b-b3ec-bdb128f9d72a\") " pod="kube-system/kube-proxy-m5zcc"
	Dec 27 20:28:04 default-k8s-diff-port-954154 kubelet[1308]: I1227 20:28:04.257773    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ca10db8-75c1-459b-b3ec-bdb128f9d72a-lib-modules\") pod \"kube-proxy-m5zcc\" (UID: \"2ca10db8-75c1-459b-b3ec-bdb128f9d72a\") " pod="kube-system/kube-proxy-m5zcc"
	Dec 27 20:28:04 default-k8s-diff-port-954154 kubelet[1308]: I1227 20:28:04.257799    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdbwv\" (UniqueName: \"kubernetes.io/projected/2ca10db8-75c1-459b-b3ec-bdb128f9d72a-kube-api-access-hdbwv\") pod \"kube-proxy-m5zcc\" (UID: \"2ca10db8-75c1-459b-b3ec-bdb128f9d72a\") " pod="kube-system/kube-proxy-m5zcc"
	Dec 27 20:28:05 default-k8s-diff-port-954154 kubelet[1308]: E1227 20:28:05.056970    1308 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-954154" containerName="etcd"
	Dec 27 20:28:05 default-k8s-diff-port-954154 kubelet[1308]: I1227 20:28:05.487854    1308 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-m5zcc" podStartSLOduration=1.487840047 podStartE2EDuration="1.487840047s" podCreationTimestamp="2025-12-27 20:28:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:28:05.48749974 +0000 UTC m=+7.155685594" watchObservedRunningTime="2025-12-27 20:28:05.487840047 +0000 UTC m=+7.156025900"
	Dec 27 20:28:07 default-k8s-diff-port-954154 kubelet[1308]: E1227 20:28:07.625179    1308 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-954154" containerName="kube-apiserver"
	Dec 27 20:28:07 default-k8s-diff-port-954154 kubelet[1308]: I1227 20:28:07.647675    1308 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-c9zm9" podStartSLOduration=1.619713258 podStartE2EDuration="3.647654244s" podCreationTimestamp="2025-12-27 20:28:04 +0000 UTC" firstStartedPulling="2025-12-27 20:28:04.515103437 +0000 UTC m=+6.183289284" lastFinishedPulling="2025-12-27 20:28:06.543044424 +0000 UTC m=+8.211230270" observedRunningTime="2025-12-27 20:28:07.496596218 +0000 UTC m=+9.164782073" watchObservedRunningTime="2025-12-27 20:28:07.647654244 +0000 UTC m=+9.315840098"
	Dec 27 20:28:08 default-k8s-diff-port-954154 kubelet[1308]: E1227 20:28:08.511579    1308 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-954154" containerName="kube-scheduler"
	Dec 27 20:28:09 default-k8s-diff-port-954154 kubelet[1308]: E1227 20:28:09.923531    1308 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-default-k8s-diff-port-954154" containerName="kube-controller-manager"
	Dec 27 20:28:15 default-k8s-diff-port-954154 kubelet[1308]: E1227 20:28:15.058272    1308 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-954154" containerName="etcd"
	Dec 27 20:28:17 default-k8s-diff-port-954154 kubelet[1308]: I1227 20:28:17.500247    1308 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 27 20:28:17 default-k8s-diff-port-954154 kubelet[1308]: E1227 20:28:17.632638    1308 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-954154" containerName="kube-apiserver"
	Dec 27 20:28:17 default-k8s-diff-port-954154 kubelet[1308]: I1227 20:28:17.654356    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fb2h\" (UniqueName: \"kubernetes.io/projected/94553f69-88cf-4e2c-94e4-99d2034bcc9a-kube-api-access-8fb2h\") pod \"coredns-7d764666f9-gtzdb\" (UID: \"94553f69-88cf-4e2c-94e4-99d2034bcc9a\") " pod="kube-system/coredns-7d764666f9-gtzdb"
	Dec 27 20:28:17 default-k8s-diff-port-954154 kubelet[1308]: I1227 20:28:17.654414    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94553f69-88cf-4e2c-94e4-99d2034bcc9a-config-volume\") pod \"coredns-7d764666f9-gtzdb\" (UID: \"94553f69-88cf-4e2c-94e4-99d2034bcc9a\") " pod="kube-system/coredns-7d764666f9-gtzdb"
	Dec 27 20:28:17 default-k8s-diff-port-954154 kubelet[1308]: I1227 20:28:17.654494    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e47d55de-82b6-47f6-b639-1c28182777af-tmp\") pod \"storage-provisioner\" (UID: \"e47d55de-82b6-47f6-b639-1c28182777af\") " pod="kube-system/storage-provisioner"
	Dec 27 20:28:17 default-k8s-diff-port-954154 kubelet[1308]: I1227 20:28:17.654517    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf6vv\" (UniqueName: \"kubernetes.io/projected/e47d55de-82b6-47f6-b639-1c28182777af-kube-api-access-xf6vv\") pod \"storage-provisioner\" (UID: \"e47d55de-82b6-47f6-b639-1c28182777af\") " pod="kube-system/storage-provisioner"
	Dec 27 20:28:18 default-k8s-diff-port-954154 kubelet[1308]: E1227 20:28:18.506489    1308 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-gtzdb" containerName="coredns"
	Dec 27 20:28:18 default-k8s-diff-port-954154 kubelet[1308]: I1227 20:28:18.516253    1308 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.516235126 podStartE2EDuration="14.516235126s" podCreationTimestamp="2025-12-27 20:28:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:28:18.516151061 +0000 UTC m=+20.184336914" watchObservedRunningTime="2025-12-27 20:28:18.516235126 +0000 UTC m=+20.184420979"
	Dec 27 20:28:18 default-k8s-diff-port-954154 kubelet[1308]: E1227 20:28:18.516687    1308 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-954154" containerName="kube-scheduler"
	Dec 27 20:28:18 default-k8s-diff-port-954154 kubelet[1308]: I1227 20:28:18.525009    1308 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-gtzdb" podStartSLOduration=14.524993458 podStartE2EDuration="14.524993458s" podCreationTimestamp="2025-12-27 20:28:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:28:18.52495147 +0000 UTC m=+20.193137324" watchObservedRunningTime="2025-12-27 20:28:18.524993458 +0000 UTC m=+20.193179312"
	Dec 27 20:28:19 default-k8s-diff-port-954154 kubelet[1308]: E1227 20:28:19.508836    1308 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-gtzdb" containerName="coredns"
	Dec 27 20:28:20 default-k8s-diff-port-954154 kubelet[1308]: E1227 20:28:20.511504    1308 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-gtzdb" containerName="coredns"
	Dec 27 20:28:20 default-k8s-diff-port-954154 kubelet[1308]: I1227 20:28:20.672248    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdrg8\" (UniqueName: \"kubernetes.io/projected/d25d862a-9040-4a22-935d-4e6d3eac79d1-kube-api-access-pdrg8\") pod \"busybox\" (UID: \"d25d862a-9040-4a22-935d-4e6d3eac79d1\") " pod="default/busybox"
	Dec 27 20:28:21 default-k8s-diff-port-954154 kubelet[1308]: I1227 20:28:21.523749    1308 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.941063506 podStartE2EDuration="1.523729989s" podCreationTimestamp="2025-12-27 20:28:20 +0000 UTC" firstStartedPulling="2025-12-27 20:28:20.870597988 +0000 UTC m=+22.538783832" lastFinishedPulling="2025-12-27 20:28:21.453264469 +0000 UTC m=+23.121450315" observedRunningTime="2025-12-27 20:28:21.523592233 +0000 UTC m=+23.191778106" watchObservedRunningTime="2025-12-27 20:28:21.523729989 +0000 UTC m=+23.191915843"
	Dec 27 20:28:26 default-k8s-diff-port-954154 kubelet[1308]: E1227 20:28:26.631271    1308 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:39994->127.0.0.1:37095: write tcp 127.0.0.1:39994->127.0.0.1:37095: write: broken pipe
	
	
	==> storage-provisioner [c08728111bdbf05276f21b28330651dfe9161a3feb4c82b95807fa2f1a56ffdb] <==
	I1227 20:28:17.978811       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 20:28:17.987499       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 20:28:17.987568       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 20:28:17.989876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:17.994426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:28:17.994563       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 20:28:17.994701       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-954154_ec4df1aa-f49a-4a52-ba1e-71b0e9e88837!
	I1227 20:28:17.994704       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d6330280-d91e-46b9-b706-b20e6fbb3c3b", APIVersion:"v1", ResourceVersion:"413", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-954154_ec4df1aa-f49a-4a52-ba1e-71b0e9e88837 became leader
	W1227 20:28:17.996546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:18.000550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:28:18.095669       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-954154_ec4df1aa-f49a-4a52-ba1e-71b0e9e88837!
	W1227 20:28:20.003974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:20.009544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:22.013382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:22.016968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:24.019698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:24.026068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:26.029490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:26.033616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:28.037108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:28.042888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-954154 -n default-k8s-diff-port-954154
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-954154 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.34s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-762177 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-762177 --alsologtostderr -v=1: exit status 80 (1.813954482s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-762177 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 20:28:38.707350  337659 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:28:38.707590  337659 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:28:38.707598  337659 out.go:374] Setting ErrFile to fd 2...
	I1227 20:28:38.707602  337659 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:28:38.707791  337659 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:28:38.708072  337659 out.go:368] Setting JSON to false
	I1227 20:28:38.708093  337659 mustload.go:66] Loading cluster: old-k8s-version-762177
	I1227 20:28:38.708463  337659 config.go:182] Loaded profile config "old-k8s-version-762177": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 20:28:38.708838  337659 cli_runner.go:164] Run: docker container inspect old-k8s-version-762177 --format={{.State.Status}}
	I1227 20:28:38.726011  337659 host.go:66] Checking if "old-k8s-version-762177" exists ...
	I1227 20:28:38.726264  337659 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:28:38.784180  337659 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:87 OomKillDisable:false NGoroutines:88 SystemTime:2025-12-27 20:28:38.774273462 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:28:38.784802  337659 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22332/minikube-v1.37.0-1766811082-22332-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766811082-22332/minikube-v1.37.0-1766811082-22332-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766811082-22332-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:old-k8s-version-762177 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1227 20:28:38.786432  337659 out.go:179] * Pausing node old-k8s-version-762177 ... 
	I1227 20:28:38.787377  337659 host.go:66] Checking if "old-k8s-version-762177" exists ...
	I1227 20:28:38.787618  337659 ssh_runner.go:195] Run: systemctl --version
	I1227 20:28:38.787658  337659 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-762177
	I1227 20:28:38.806080  337659 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/old-k8s-version-762177/id_rsa Username:docker}
	I1227 20:28:38.895331  337659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:28:38.907800  337659 pause.go:52] kubelet running: true
	I1227 20:28:38.907880  337659 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:28:39.062598  337659 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:28:39.062680  337659 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:28:39.139651  337659 cri.go:96] found id: "4dd452d9fc8918c8129346b7f3bc1aeee2b171e3ced6307cdf17e6c10db58728"
	I1227 20:28:39.139674  337659 cri.go:96] found id: "eada932e924808faeee9c878de584d881778becf355c6f5504553b73b1fec7be"
	I1227 20:28:39.139679  337659 cri.go:96] found id: "c16a2397d5843507ec06cf68f4c83aa43e3b839822d403026842652a8823a42f"
	I1227 20:28:39.139684  337659 cri.go:96] found id: "1befa902a36e4002b02f43550e2e59ec920f410ef258b8ded14b8e67d83abd04"
	I1227 20:28:39.139687  337659 cri.go:96] found id: "01b59b513ae35e88da523ea015be42462d6f4599e4797daa7b6679fbbed4661e"
	I1227 20:28:39.139692  337659 cri.go:96] found id: "36cd7d1ea82f122132780da97e6256d4f13817d670d6667c1f16d860e3bbb36e"
	I1227 20:28:39.139696  337659 cri.go:96] found id: "982d6cdd0699931bca3b7344182b9ad5bb73733752da7f6b7e5a1efce4a6c161"
	I1227 20:28:39.139701  337659 cri.go:96] found id: "926fd25cbe2599a750ad591739ab8c2882aa901ea240849ff6d2acbb12f9a31c"
	I1227 20:28:39.139705  337659 cri.go:96] found id: "e105aca4bc5d8f2aab2fa4e7fd30105025db36d98d34b0f796be12a2e0458cfb"
	I1227 20:28:39.139713  337659 cri.go:96] found id: "8f2b2729bf6e3b49564f3fc1e36113740430da0d8d8e061686d3ab36bcfa129e"
	I1227 20:28:39.139717  337659 cri.go:96] found id: "4c243642f0c51a53301c6d94addb80ead9db50be599b731bdabb9de6f1835c89"
	I1227 20:28:39.139722  337659 cri.go:96] found id: ""
	I1227 20:28:39.139771  337659 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:28:39.151987  337659 retry.go:84] will retry after 400ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:28:39Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:28:39.516517  337659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:28:39.531511  337659 pause.go:52] kubelet running: false
	I1227 20:28:39.531575  337659 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:28:39.686734  337659 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:28:39.686815  337659 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:28:39.754712  337659 cri.go:96] found id: "4dd452d9fc8918c8129346b7f3bc1aeee2b171e3ced6307cdf17e6c10db58728"
	I1227 20:28:39.754734  337659 cri.go:96] found id: "eada932e924808faeee9c878de584d881778becf355c6f5504553b73b1fec7be"
	I1227 20:28:39.754738  337659 cri.go:96] found id: "c16a2397d5843507ec06cf68f4c83aa43e3b839822d403026842652a8823a42f"
	I1227 20:28:39.754741  337659 cri.go:96] found id: "1befa902a36e4002b02f43550e2e59ec920f410ef258b8ded14b8e67d83abd04"
	I1227 20:28:39.754743  337659 cri.go:96] found id: "01b59b513ae35e88da523ea015be42462d6f4599e4797daa7b6679fbbed4661e"
	I1227 20:28:39.754747  337659 cri.go:96] found id: "36cd7d1ea82f122132780da97e6256d4f13817d670d6667c1f16d860e3bbb36e"
	I1227 20:28:39.754751  337659 cri.go:96] found id: "982d6cdd0699931bca3b7344182b9ad5bb73733752da7f6b7e5a1efce4a6c161"
	I1227 20:28:39.754756  337659 cri.go:96] found id: "926fd25cbe2599a750ad591739ab8c2882aa901ea240849ff6d2acbb12f9a31c"
	I1227 20:28:39.754760  337659 cri.go:96] found id: "e105aca4bc5d8f2aab2fa4e7fd30105025db36d98d34b0f796be12a2e0458cfb"
	I1227 20:28:39.754767  337659 cri.go:96] found id: "8f2b2729bf6e3b49564f3fc1e36113740430da0d8d8e061686d3ab36bcfa129e"
	I1227 20:28:39.754772  337659 cri.go:96] found id: "4c243642f0c51a53301c6d94addb80ead9db50be599b731bdabb9de6f1835c89"
	I1227 20:28:39.754778  337659 cri.go:96] found id: ""
	I1227 20:28:39.754835  337659 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:28:40.223158  337659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:28:40.236355  337659 pause.go:52] kubelet running: false
	I1227 20:28:40.236409  337659 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:28:40.381406  337659 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:28:40.381515  337659 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:28:40.446238  337659 cri.go:96] found id: "4dd452d9fc8918c8129346b7f3bc1aeee2b171e3ced6307cdf17e6c10db58728"
	I1227 20:28:40.446259  337659 cri.go:96] found id: "eada932e924808faeee9c878de584d881778becf355c6f5504553b73b1fec7be"
	I1227 20:28:40.446265  337659 cri.go:96] found id: "c16a2397d5843507ec06cf68f4c83aa43e3b839822d403026842652a8823a42f"
	I1227 20:28:40.446270  337659 cri.go:96] found id: "1befa902a36e4002b02f43550e2e59ec920f410ef258b8ded14b8e67d83abd04"
	I1227 20:28:40.446275  337659 cri.go:96] found id: "01b59b513ae35e88da523ea015be42462d6f4599e4797daa7b6679fbbed4661e"
	I1227 20:28:40.446280  337659 cri.go:96] found id: "36cd7d1ea82f122132780da97e6256d4f13817d670d6667c1f16d860e3bbb36e"
	I1227 20:28:40.446283  337659 cri.go:96] found id: "982d6cdd0699931bca3b7344182b9ad5bb73733752da7f6b7e5a1efce4a6c161"
	I1227 20:28:40.446288  337659 cri.go:96] found id: "926fd25cbe2599a750ad591739ab8c2882aa901ea240849ff6d2acbb12f9a31c"
	I1227 20:28:40.446305  337659 cri.go:96] found id: "e105aca4bc5d8f2aab2fa4e7fd30105025db36d98d34b0f796be12a2e0458cfb"
	I1227 20:28:40.446317  337659 cri.go:96] found id: "8f2b2729bf6e3b49564f3fc1e36113740430da0d8d8e061686d3ab36bcfa129e"
	I1227 20:28:40.446320  337659 cri.go:96] found id: "4c243642f0c51a53301c6d94addb80ead9db50be599b731bdabb9de6f1835c89"
	I1227 20:28:40.446323  337659 cri.go:96] found id: ""
	I1227 20:28:40.446364  337659 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:28:40.459376  337659 out.go:203] 
	W1227 20:28:40.460579  337659 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:28:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:28:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 20:28:40.460607  337659 out.go:285] * 
	* 
	W1227 20:28:40.462712  337659 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 20:28:40.463841  337659 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-762177 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-762177
helpers_test.go:244: (dbg) docker inspect old-k8s-version-762177:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b10dcfebdaaf022555cb070af7e83f24bcff8c91713c44d481ba68802b155444",
	        "Created": "2025-12-27T20:26:31.0677059Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 324184,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:27:40.56875739Z",
	            "FinishedAt": "2025-12-27T20:27:39.262226981Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/b10dcfebdaaf022555cb070af7e83f24bcff8c91713c44d481ba68802b155444/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b10dcfebdaaf022555cb070af7e83f24bcff8c91713c44d481ba68802b155444/hostname",
	        "HostsPath": "/var/lib/docker/containers/b10dcfebdaaf022555cb070af7e83f24bcff8c91713c44d481ba68802b155444/hosts",
	        "LogPath": "/var/lib/docker/containers/b10dcfebdaaf022555cb070af7e83f24bcff8c91713c44d481ba68802b155444/b10dcfebdaaf022555cb070af7e83f24bcff8c91713c44d481ba68802b155444-json.log",
	        "Name": "/old-k8s-version-762177",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-762177:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-762177",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b10dcfebdaaf022555cb070af7e83f24bcff8c91713c44d481ba68802b155444",
	                "LowerDir": "/var/lib/docker/overlay2/929535ff224193aaf6235bf2344829b22539dbfd7b23129e5d3c2b26b91f334f-init/diff:/var/lib/docker/overlay2/37a0694d5e09176fe02add5b16d603e44d65e3a985a3465775c60819ea782502/diff",
	                "MergedDir": "/var/lib/docker/overlay2/929535ff224193aaf6235bf2344829b22539dbfd7b23129e5d3c2b26b91f334f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/929535ff224193aaf6235bf2344829b22539dbfd7b23129e5d3c2b26b91f334f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/929535ff224193aaf6235bf2344829b22539dbfd7b23129e5d3c2b26b91f334f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-762177",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-762177/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-762177",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-762177",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-762177",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "eef53885e2a3b5a92f25bf3a5da7b7b0cdac4cb7e377b3cb17e2e56870c84360",
	            "SandboxKey": "/var/run/docker/netns/eef53885e2a3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-762177": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bbffe05820d013a4fca696f72125227eec8cd0ee61afcb8620d53b5d2291b7b7",
	                    "EndpointID": "4690c75d81c70225f94b58fe949b5362fab52827825a7aa34ad9a64d499cdd02",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "2e:74:e0:19:b0:b5",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-762177",
	                        "b10dcfebdaaf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-762177 -n old-k8s-version-762177
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-762177 -n old-k8s-version-762177: exit status 2 (318.765927ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-762177 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-762177 logs -n 25: (1.264185741s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-436655 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ ssh     │ -p bridge-436655 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo containerd config dump                                                                                                                                                                                                  │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo crio config                                                                                                                                                                                                             │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ delete  │ -p bridge-436655                                                                                                                                                                                                                              │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ addons  │ enable metrics-server -p no-preload-014435 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ delete  │ -p disable-driver-mounts-541137                                                                                                                                                                                                               │ disable-driver-mounts-541137 │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ start   │ -p old-k8s-version-762177 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-762177       │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:28 UTC │
	│ start   │ -p default-k8s-diff-port-954154 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-954154 │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:28 UTC │
	│ stop    │ -p no-preload-014435 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ addons  │ enable dashboard -p no-preload-014435 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ start   │ -p no-preload-014435 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-820583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-820583           │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ stop    │ -p embed-certs-820583 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-820583           │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-954154 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-954154 │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-820583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-820583           │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ start   │ -p embed-certs-820583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-820583           │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-954154 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-954154 │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ image   │ old-k8s-version-762177 image list --format=json                                                                                                                                                                                               │ old-k8s-version-762177       │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ pause   │ -p old-k8s-version-762177 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-762177       │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:28:27
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:28:27.869622  334810 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:28:27.869949  334810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:28:27.869962  334810 out.go:374] Setting ErrFile to fd 2...
	I1227 20:28:27.869967  334810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:28:27.870223  334810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:28:27.870728  334810 out.go:368] Setting JSON to false
	I1227 20:28:27.872105  334810 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4257,"bootTime":1766863051,"procs":327,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 20:28:27.872172  334810 start.go:143] virtualization: kvm guest
	I1227 20:28:27.873965  334810 out.go:179] * [embed-certs-820583] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 20:28:27.875414  334810 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:28:27.875437  334810 notify.go:221] Checking for updates...
	I1227 20:28:27.877247  334810 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:28:27.878437  334810 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:28:27.879374  334810 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube
	I1227 20:28:27.880273  334810 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 20:28:27.881375  334810 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:28:27.884900  334810 config.go:182] Loaded profile config "embed-certs-820583": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:27.885728  334810 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:28:27.910903  334810 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 20:28:27.911104  334810 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:28:27.971781  334810 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-27 20:28:27.961369743 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:28:27.971939  334810 docker.go:319] overlay module found
	I1227 20:28:27.973482  334810 out.go:179] * Using the docker driver based on existing profile
	I1227 20:28:27.974609  334810 start.go:309] selected driver: docker
	I1227 20:28:27.974629  334810 start.go:928] validating driver "docker" against &{Name:embed-certs-820583 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-820583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:28:27.974735  334810 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:28:27.975559  334810 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:28:28.038285  334810 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-27 20:28:28.027690351 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:28:28.038663  334810 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:28:28.038713  334810 cni.go:84] Creating CNI manager for ""
	I1227 20:28:28.038790  334810 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:28:28.038846  334810 start.go:353] cluster config:
	{Name:embed-certs-820583 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-820583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:28:28.040659  334810 out.go:179] * Starting "embed-certs-820583" primary control-plane node in "embed-certs-820583" cluster
	I1227 20:28:28.042430  334810 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:28:28.043639  334810 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:28:28.044629  334810 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:28:28.044657  334810 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 20:28:28.044666  334810 cache.go:65] Caching tarball of preloaded images
	I1227 20:28:28.044711  334810 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:28:28.044774  334810 preload.go:251] Found /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 20:28:28.044787  334810 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:28:28.044885  334810 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/config.json ...
	I1227 20:28:28.066231  334810 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:28:28.066251  334810 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:28:28.066265  334810 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:28:28.066296  334810 start.go:360] acquireMachinesLock for embed-certs-820583: {Name:mk01eaa0328a4f3967965b40089a5a188a2ca888 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:28:28.066352  334810 start.go:364] duration metric: took 35.282µs to acquireMachinesLock for "embed-certs-820583"
	I1227 20:28:28.066367  334810 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:28:28.066374  334810 fix.go:54] fixHost starting: 
	I1227 20:28:28.066576  334810 cli_runner.go:164] Run: docker container inspect embed-certs-820583 --format={{.State.Status}}
	I1227 20:28:28.086021  334810 fix.go:112] recreateIfNeeded on embed-certs-820583: state=Stopped err=<nil>
	W1227 20:28:28.086079  334810 fix.go:138] unexpected machine state, will restart: <nil>
	W1227 20:28:23.509601  329454 pod_ready.go:104] pod "coredns-7d764666f9-nvrq6" is not "Ready", error: <nil>
	W1227 20:28:26.009326  329454 pod_ready.go:104] pod "coredns-7d764666f9-nvrq6" is not "Ready", error: <nil>
	W1227 20:28:28.009634  329454 pod_ready.go:104] pod "coredns-7d764666f9-nvrq6" is not "Ready", error: <nil>
	I1227 20:28:28.087636  334810 out.go:252] * Restarting existing docker container for "embed-certs-820583" ...
	I1227 20:28:28.087746  334810 cli_runner.go:164] Run: docker start embed-certs-820583
	I1227 20:28:28.347507  334810 cli_runner.go:164] Run: docker container inspect embed-certs-820583 --format={{.State.Status}}
	I1227 20:28:28.367678  334810 kic.go:430] container "embed-certs-820583" state is running.
	I1227 20:28:28.368042  334810 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-820583
	I1227 20:28:28.389020  334810 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/config.json ...
	I1227 20:28:28.389289  334810 machine.go:94] provisionDockerMachine start ...
	I1227 20:28:28.389387  334810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:28:28.410031  334810 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:28.410349  334810 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1227 20:28:28.410367  334810 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:28:28.411185  334810 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50516->127.0.0.1:33118: read: connection reset by peer
	I1227 20:28:31.533873  334810 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-820583
	
	I1227 20:28:31.533901  334810 ubuntu.go:182] provisioning hostname "embed-certs-820583"
	I1227 20:28:31.533990  334810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:28:31.552057  334810 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:31.552270  334810 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1227 20:28:31.552290  334810 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-820583 && echo "embed-certs-820583" | sudo tee /etc/hostname
	I1227 20:28:31.685659  334810 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-820583
	
	I1227 20:28:31.685742  334810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:28:31.704279  334810 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:31.704498  334810 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1227 20:28:31.704515  334810 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-820583' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-820583/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-820583' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:28:31.826164  334810 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:28:31.826190  334810 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-10897/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-10897/.minikube}
	I1227 20:28:31.826219  334810 ubuntu.go:190] setting up certificates
	I1227 20:28:31.826236  334810 provision.go:84] configureAuth start
	I1227 20:28:31.826291  334810 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-820583
	I1227 20:28:31.843777  334810 provision.go:143] copyHostCerts
	I1227 20:28:31.843826  334810 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem, removing ...
	I1227 20:28:31.843840  334810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem
	I1227 20:28:31.843901  334810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem (1078 bytes)
	I1227 20:28:31.844012  334810 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem, removing ...
	I1227 20:28:31.844023  334810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem
	I1227 20:28:31.844051  334810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem (1123 bytes)
	I1227 20:28:31.844113  334810 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem, removing ...
	I1227 20:28:31.844121  334810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem
	I1227 20:28:31.844144  334810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem (1675 bytes)
	I1227 20:28:31.844191  334810 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem org=jenkins.embed-certs-820583 san=[127.0.0.1 192.168.76.2 embed-certs-820583 localhost minikube]
	I1227 20:28:31.948193  334810 provision.go:177] copyRemoteCerts
	I1227 20:28:31.948257  334810 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:28:31.948304  334810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:28:31.966006  334810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/embed-certs-820583/id_rsa Username:docker}
	I1227 20:28:32.055748  334810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:28:32.073036  334810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 20:28:32.089202  334810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:28:32.106163  334810 provision.go:87] duration metric: took 279.906131ms to configureAuth
	I1227 20:28:32.106186  334810 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:28:32.106343  334810 config.go:182] Loaded profile config "embed-certs-820583": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:32.106434  334810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:28:32.124139  334810 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:32.124367  334810 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1227 20:28:32.124389  334810 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:28:32.419316  334810 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:28:32.419342  334810 machine.go:97] duration metric: took 4.030032868s to provisionDockerMachine
	I1227 20:28:32.419356  334810 start.go:293] postStartSetup for "embed-certs-820583" (driver="docker")
	I1227 20:28:32.419369  334810 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:28:32.419451  334810 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:28:32.419503  334810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:28:32.438147  334810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/embed-certs-820583/id_rsa Username:docker}
	I1227 20:28:32.527971  334810 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:28:32.531299  334810 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:28:32.531324  334810 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:28:32.531334  334810 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/addons for local assets ...
	I1227 20:28:32.531380  334810 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/files for local assets ...
	I1227 20:28:32.531470  334810 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem -> 144272.pem in /etc/ssl/certs
	I1227 20:28:32.531616  334810 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:28:32.538773  334810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:28:32.555609  334810 start.go:296] duration metric: took 136.241549ms for postStartSetup
	I1227 20:28:32.555668  334810 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:28:32.555700  334810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:28:32.573043  334810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/embed-certs-820583/id_rsa Username:docker}
	I1227 20:28:32.660907  334810 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:28:32.665337  334810 fix.go:56] duration metric: took 4.598958428s for fixHost
	I1227 20:28:32.665356  334810 start.go:83] releasing machines lock for "embed-certs-820583", held for 4.598995531s
	I1227 20:28:32.665409  334810 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-820583
	I1227 20:28:32.683426  334810 ssh_runner.go:195] Run: cat /version.json
	I1227 20:28:32.683478  334810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:28:32.683497  334810 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:28:32.683579  334810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:28:32.702843  334810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/embed-certs-820583/id_rsa Username:docker}
	I1227 20:28:32.703410  334810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/embed-certs-820583/id_rsa Username:docker}
	I1227 20:28:32.844134  334810 ssh_runner.go:195] Run: systemctl --version
	I1227 20:28:32.850997  334810 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	W1227 20:28:30.508760  329454 pod_ready.go:104] pod "coredns-7d764666f9-nvrq6" is not "Ready", error: <nil>
	W1227 20:28:32.509283  329454 pod_ready.go:104] pod "coredns-7d764666f9-nvrq6" is not "Ready", error: <nil>
	I1227 20:28:32.884461  334810 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:28:32.889000  334810 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:28:32.889073  334810 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:28:32.896850  334810 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:28:32.896869  334810 start.go:496] detecting cgroup driver to use...
	I1227 20:28:32.896898  334810 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 20:28:32.896988  334810 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:28:32.910536  334810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:28:32.922230  334810 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:28:32.922292  334810 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:28:32.935655  334810 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:28:32.947069  334810 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:28:33.022743  334810 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:28:33.101392  334810 docker.go:234] disabling docker service ...
	I1227 20:28:33.101465  334810 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:28:33.114942  334810 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:28:33.126563  334810 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:28:33.210535  334810 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:28:33.294275  334810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:28:33.306416  334810 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:28:33.320038  334810 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:28:33.320084  334810 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:33.328622  334810 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 20:28:33.328672  334810 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:33.336992  334810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:33.345045  334810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:33.353552  334810 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:28:33.361898  334810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:33.370368  334810 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:33.378203  334810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:33.386298  334810 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:28:33.393268  334810 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:28:33.400398  334810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:28:33.477431  334810 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:28:33.616365  334810 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:28:33.616443  334810 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:28:33.620412  334810 start.go:574] Will wait 60s for crictl version
	I1227 20:28:33.620463  334810 ssh_runner.go:195] Run: which crictl
	I1227 20:28:33.623944  334810 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:28:33.647125  334810 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:28:33.647205  334810 ssh_runner.go:195] Run: crio --version
	I1227 20:28:33.674615  334810 ssh_runner.go:195] Run: crio --version
	I1227 20:28:33.703195  334810 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:28:33.704251  334810 cli_runner.go:164] Run: docker network inspect embed-certs-820583 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:28:33.721941  334810 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 20:28:33.726491  334810 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:28:33.736959  334810 kubeadm.go:884] updating cluster {Name:embed-certs-820583 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-820583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:28:33.737069  334810 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:28:33.737114  334810 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:28:33.770436  334810 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:28:33.770456  334810 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:28:33.770512  334810 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:28:33.796084  334810 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:28:33.796103  334810 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:28:33.796110  334810 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 20:28:33.796206  334810 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-820583 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-820583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:28:33.796277  334810 ssh_runner.go:195] Run: crio config
	I1227 20:28:33.839567  334810 cni.go:84] Creating CNI manager for ""
	I1227 20:28:33.839588  334810 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:28:33.839604  334810 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:28:33.839626  334810 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-820583 NodeName:embed-certs-820583 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:28:33.839784  334810 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-820583"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:28:33.839843  334810 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:28:33.848036  334810 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:28:33.848128  334810 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:28:33.855522  334810 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1227 20:28:33.867530  334810 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:28:33.879334  334810 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1227 20:28:33.891782  334810 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:28:33.895216  334810 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:28:33.904553  334810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:28:33.980095  334810 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:28:34.002072  334810 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583 for IP: 192.168.76.2
	I1227 20:28:34.002097  334810 certs.go:195] generating shared ca certs ...
	I1227 20:28:34.002115  334810 certs.go:227] acquiring lock for ca certs: {Name:mkf159d13acb73e6d0ac046e4aaef1dce56a185d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:34.002247  334810 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key
	I1227 20:28:34.002293  334810 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key
	I1227 20:28:34.002303  334810 certs.go:257] generating profile certs ...
	I1227 20:28:34.002381  334810 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/client.key
	I1227 20:28:34.002440  334810 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/apiserver.key.da959220
	I1227 20:28:34.002479  334810 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/proxy-client.key
	I1227 20:28:34.002605  334810 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem (1338 bytes)
	W1227 20:28:34.002642  334810 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427_empty.pem, impossibly tiny 0 bytes
	I1227 20:28:34.002648  334810 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 20:28:34.002671  334810 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:28:34.002697  334810 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:28:34.002722  334810 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem (1675 bytes)
	I1227 20:28:34.002763  334810 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:28:34.003366  334810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:28:34.021779  334810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:28:34.040055  334810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:28:34.058850  334810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 20:28:34.082045  334810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1227 20:28:34.100535  334810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 20:28:34.117589  334810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:28:34.134162  334810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:28:34.150821  334810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /usr/share/ca-certificates/144272.pem (1708 bytes)
	I1227 20:28:34.167384  334810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:28:34.184031  334810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem --> /usr/share/ca-certificates/14427.pem (1338 bytes)
	I1227 20:28:34.201678  334810 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:28:34.213419  334810 ssh_runner.go:195] Run: openssl version
	I1227 20:28:34.219254  334810 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/144272.pem
	I1227 20:28:34.226410  334810 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/144272.pem /etc/ssl/certs/144272.pem
	I1227 20:28:34.233324  334810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144272.pem
	I1227 20:28:34.236958  334810 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 19:58 /usr/share/ca-certificates/144272.pem
	I1227 20:28:34.237013  334810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144272.pem
	I1227 20:28:34.272308  334810 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:28:34.279526  334810 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:34.287291  334810 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:28:34.295280  334810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:34.299228  334810 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:34.299268  334810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:34.333791  334810 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:28:34.341076  334810 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14427.pem
	I1227 20:28:34.348022  334810 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14427.pem /etc/ssl/certs/14427.pem
	I1227 20:28:34.355313  334810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14427.pem
	I1227 20:28:34.359567  334810 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 19:58 /usr/share/ca-certificates/14427.pem
	I1227 20:28:34.359615  334810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14427.pem
	I1227 20:28:34.395170  334810 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:28:34.402558  334810 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:28:34.406175  334810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:28:34.440284  334810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:28:34.474375  334810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:28:34.517295  334810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:28:34.559835  334810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:28:34.604769  334810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1227 20:28:34.661556  334810 kubeadm.go:401] StartCluster: {Name:embed-certs-820583 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-820583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:28:34.661652  334810 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:28:34.661708  334810 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:28:34.694031  334810 cri.go:96] found id: "e3219756543583e2bc1bc017b95edaeb3245180fdb7e19b51f610a80a7bdaf8f"
	I1227 20:28:34.694055  334810 cri.go:96] found id: "c920e23ef438912ca2e52bd7890591042bc74ed5ef30728cbbd1281035c27d3e"
	I1227 20:28:34.694062  334810 cri.go:96] found id: "7d0b7a7e858d75dabf988a9b76fa95b39147d055357b9b794f4486154f20ba5a"
	I1227 20:28:34.694067  334810 cri.go:96] found id: "383462ccad1518d79addd7cf9399f63bb733b06b0d9e1d6abe521c276e668b92"
	I1227 20:28:34.694070  334810 cri.go:96] found id: ""
	I1227 20:28:34.694113  334810 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 20:28:34.706566  334810 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:28:34Z" level=error msg="open /run/runc: no such file or directory"
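The warning above is benign in this run: "sudo runc list -f json" fails because /run/runc does not exist yet, i.e. runc has created no containers on this node, so there is nothing to unpause and minikube continues with the restart. A sketch of the same check run by hand (profile name from this log):

	# Sketch only: the check that produced the warning above.
	minikube -p embed-certs-820583 ssh "sudo runc list -f json"
	# An absent /run/runc simply means no runc-managed containers exist yet:
	minikube -p embed-certs-820583 ssh "sudo ls /run/runc"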
	I1227 20:28:34.706662  334810 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:28:34.714873  334810 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:28:34.714887  334810 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:28:34.714942  334810 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:28:34.722390  334810 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:28:34.723325  334810 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-820583" does not appear in /home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:28:34.723986  334810 kubeconfig.go:62] /home/jenkins/minikube-integration/22332-10897/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-820583" cluster setting kubeconfig missing "embed-certs-820583" context setting]
	I1227 20:28:34.725000  334810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/kubeconfig: {Name:mk4ccafb996928672a8e78a62ce47d8add645009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:34.726873  334810 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:28:34.734960  334810 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1227 20:28:34.734984  334810 kubeadm.go:602] duration metric: took 20.091058ms to restartPrimaryControlPlane
	I1227 20:28:34.734992  334810 kubeadm.go:403] duration metric: took 73.4475ms to StartCluster
	I1227 20:28:34.735009  334810 settings.go:142] acquiring lock: {Name:mkf3077b12a3cef0d75d0d0642c2652aa74718f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:34.735063  334810 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:28:34.737210  334810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/kubeconfig: {Name:mk4ccafb996928672a8e78a62ce47d8add645009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
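Because the profile was missing from the shared kubeconfig, minikube rewrites that file to add the embed-certs-820583 cluster and context before continuing. A sketch of verifying the repaired kubeconfig afterwards (the path is the Jenkins workspace one from this log):

	# Sketch only: confirm the cluster and context were written back.
	kubectl config get-contexts \
	  --kubeconfig /home/jenkins/minikube-integration/22332-10897/kubeconfig
	kubectl config view \
	  --kubeconfig /home/jenkins/minikube-integration/22332-10897/kubeconfig \
	  -o jsonpath='{.clusters[*].name}'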
	I1227 20:28:34.737440  334810 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:28:34.737495  334810 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:28:34.737610  334810 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-820583"
	I1227 20:28:34.737628  334810 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-820583"
	W1227 20:28:34.737637  334810 addons.go:248] addon storage-provisioner should already be in state true
	I1227 20:28:34.737667  334810 config.go:182] Loaded profile config "embed-certs-820583": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:34.737684  334810 addons.go:70] Setting dashboard=true in profile "embed-certs-820583"
	I1227 20:28:34.737696  334810 addons.go:239] Setting addon dashboard=true in "embed-certs-820583"
	W1227 20:28:34.737703  334810 addons.go:248] addon dashboard should already be in state true
	I1227 20:28:34.737727  334810 host.go:66] Checking if "embed-certs-820583" exists ...
	I1227 20:28:34.737714  334810 addons.go:70] Setting default-storageclass=true in profile "embed-certs-820583"
	I1227 20:28:34.737747  334810 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-820583"
	I1227 20:28:34.737674  334810 host.go:66] Checking if "embed-certs-820583" exists ...
	I1227 20:28:34.738112  334810 cli_runner.go:164] Run: docker container inspect embed-certs-820583 --format={{.State.Status}}
	I1227 20:28:34.738273  334810 cli_runner.go:164] Run: docker container inspect embed-certs-820583 --format={{.State.Status}}
	I1227 20:28:34.738303  334810 cli_runner.go:164] Run: docker container inspect embed-certs-820583 --format={{.State.Status}}
	I1227 20:28:34.740057  334810 out.go:179] * Verifying Kubernetes components...
	I1227 20:28:34.741162  334810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:28:34.764457  334810 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 20:28:34.764557  334810 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:28:34.764970  334810 addons.go:239] Setting addon default-storageclass=true in "embed-certs-820583"
	W1227 20:28:34.764992  334810 addons.go:248] addon default-storageclass should already be in state true
	I1227 20:28:34.765019  334810 host.go:66] Checking if "embed-certs-820583" exists ...
	I1227 20:28:34.765487  334810 cli_runner.go:164] Run: docker container inspect embed-certs-820583 --format={{.State.Status}}
	I1227 20:28:34.765791  334810 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:28:34.765809  334810 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:28:34.765859  334810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:28:34.766866  334810 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 20:28:34.767993  334810 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 20:28:34.768012  334810 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 20:28:34.768067  334810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:28:34.795949  334810 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:28:34.795974  334810 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:28:34.796033  334810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:28:34.800317  334810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/embed-certs-820583/id_rsa Username:docker}
	I1227 20:28:34.802056  334810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/embed-certs-820583/id_rsa Username:docker}
	I1227 20:28:34.821558  334810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/embed-certs-820583/id_rsa Username:docker}
	I1227 20:28:34.894478  334810 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:28:34.900306  334810 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:28:34.905154  334810 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 20:28:34.905174  334810 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 20:28:34.909440  334810 node_ready.go:35] waiting up to 6m0s for node "embed-certs-820583" to be "Ready" ...
	I1227 20:28:34.919719  334810 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:28:34.920263  334810 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 20:28:34.920302  334810 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 20:28:34.933741  334810 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 20:28:34.933762  334810 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 20:28:34.948336  334810 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 20:28:34.948364  334810 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 20:28:34.964787  334810 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 20:28:34.964813  334810 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 20:28:34.978251  334810 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 20:28:34.978273  334810 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 20:28:34.990784  334810 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 20:28:34.990802  334810 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 20:28:35.002643  334810 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 20:28:35.002661  334810 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 20:28:35.015206  334810 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:28:35.015221  334810 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 20:28:35.027091  334810 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
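The addon flow above is: scp each manifest into /etc/kubernetes/addons on the node, then apply the whole set with the bundled kubectl against the node-local kubeconfig. A sketch of checking the result once that apply finishes (object names follow the stock kubernetes-dashboard manifests, so treat them as assumptions):

	# Sketch only: confirm the dashboard objects created by the apply above.
	kubectl -n kubernetes-dashboard get deploy,svc,sa
	kubectl -n kubernetes-dashboard rollout status deploy/kubernetes-dashboard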
	I1227 20:28:36.016643  334810 node_ready.go:49] node "embed-certs-820583" is "Ready"
	I1227 20:28:36.016677  334810 node_ready.go:38] duration metric: took 1.107196827s for node "embed-certs-820583" to be "Ready" ...
	I1227 20:28:36.016694  334810 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:28:36.016751  334810 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:28:36.595834  334810 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.695491743s)
	I1227 20:28:36.595865  334810 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.676123115s)
	I1227 20:28:36.596163  334810 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.569029601s)
	I1227 20:28:36.596338  334810 api_server.go:72] duration metric: took 1.858867918s to wait for apiserver process to appear ...
	I1227 20:28:36.596358  334810 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:28:36.596416  334810 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 20:28:36.598769  334810 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-820583 addons enable metrics-server
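The hint above only concerns optional dashboard graphs; nothing in this run depends on it. If it were wanted, the follow-up would look roughly like this (profile name from this log; the deployment name is the addon's usual one and is an assumption here):

	minikube -p embed-certs-820583 addons enable metrics-server
	kubectl -n kube-system get deploy metrics-server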
	
	I1227 20:28:36.603545  334810 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:28:36.603578  334810 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
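The two [-] entries (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) are post-start hooks that routinely fail for a moment right after an apiserver restart, which is why minikube keeps polling instead of treating the 500 as fatal; the next probes below show them clearing one by one until /healthz returns 200. A sketch of the same verbose probe (endpoint taken from this log; anonymous access to /healthz is normally allowed by the default RBAC):

	# Sketch only: the verbose health probe minikube is polling above.
	kubectl get --raw '/healthz?verbose'
	curl -sk 'https://192.168.76.2:8443/healthz?verbose'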
	I1227 20:28:36.612082  334810 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 20:28:36.613102  334810 addons.go:530] duration metric: took 1.875612937s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1227 20:28:37.097095  334810 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 20:28:37.102574  334810 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:28:37.102602  334810 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:28:37.597070  334810 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 20:28:37.601498  334810 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 20:28:37.602476  334810 api_server.go:141] control plane version: v1.35.0
	I1227 20:28:37.602499  334810 api_server.go:131] duration metric: took 1.006135405s to wait for apiserver health ...
	I1227 20:28:37.602507  334810 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:28:37.606107  334810 system_pods.go:59] 8 kube-system pods found
	I1227 20:28:37.606141  334810 system_pods.go:61] "coredns-7d764666f9-nvnjg" [43ffce66-ea7f-41f4-aa47-ce8860d08b61] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:28:37.606153  334810 system_pods.go:61] "etcd-embed-certs-820583" [f38c956c-3a86-4b12-8fd8-6984329006a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:28:37.606166  334810 system_pods.go:61] "kindnet-6d59t" [4b85db12-4b05-4f39-af95-5c3a6aa7c0ad] Running
	I1227 20:28:37.606176  334810 system_pods.go:61] "kube-apiserver-embed-certs-820583" [c8a09b40-722a-4ac2-a872-5ffaa9b12c59] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:28:37.606188  334810 system_pods.go:61] "kube-controller-manager-embed-certs-820583" [30e23f74-cef5-47d5-b851-361af993b344] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:28:37.606198  334810 system_pods.go:61] "kube-proxy-srwxn" [8d08af7a-1a92-4a9d-b68e-c816e37f2d26] Running
	I1227 20:28:37.606213  334810 system_pods.go:61] "kube-scheduler-embed-certs-820583" [bf51cd6b-c054-4544-b104-6c19cd0d5491] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:28:37.606217  334810 system_pods.go:61] "storage-provisioner" [c02473c8-cc31-4a36-8823-cea2e486cdba] Running
	I1227 20:28:37.606228  334810 system_pods.go:74] duration metric: took 3.714942ms to wait for pod list to return data ...
	I1227 20:28:37.606242  334810 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:28:37.608668  334810 default_sa.go:45] found service account: "default"
	I1227 20:28:37.608687  334810 default_sa.go:55] duration metric: took 2.438397ms for default service account to be created ...
	I1227 20:28:37.608695  334810 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:28:37.611095  334810 system_pods.go:86] 8 kube-system pods found
	I1227 20:28:37.611116  334810 system_pods.go:89] "coredns-7d764666f9-nvnjg" [43ffce66-ea7f-41f4-aa47-ce8860d08b61] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:28:37.611124  334810 system_pods.go:89] "etcd-embed-certs-820583" [f38c956c-3a86-4b12-8fd8-6984329006a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:28:37.611129  334810 system_pods.go:89] "kindnet-6d59t" [4b85db12-4b05-4f39-af95-5c3a6aa7c0ad] Running
	I1227 20:28:37.611134  334810 system_pods.go:89] "kube-apiserver-embed-certs-820583" [c8a09b40-722a-4ac2-a872-5ffaa9b12c59] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:28:37.611140  334810 system_pods.go:89] "kube-controller-manager-embed-certs-820583" [30e23f74-cef5-47d5-b851-361af993b344] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:28:37.611147  334810 system_pods.go:89] "kube-proxy-srwxn" [8d08af7a-1a92-4a9d-b68e-c816e37f2d26] Running
	I1227 20:28:37.611153  334810 system_pods.go:89] "kube-scheduler-embed-certs-820583" [bf51cd6b-c054-4544-b104-6c19cd0d5491] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:28:37.611159  334810 system_pods.go:89] "storage-provisioner" [c02473c8-cc31-4a36-8823-cea2e486cdba] Running
	I1227 20:28:37.611165  334810 system_pods.go:126] duration metric: took 2.465719ms to wait for k8s-apps to be running ...
	I1227 20:28:37.611174  334810 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:28:37.611215  334810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:28:37.624220  334810 system_svc.go:56] duration metric: took 13.036393ms WaitForService to wait for kubelet
	I1227 20:28:37.624254  334810 kubeadm.go:587] duration metric: took 2.886787601s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:28:37.624280  334810 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:28:37.627161  334810 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 20:28:37.627184  334810 node_conditions.go:123] node cpu capacity is 8
	I1227 20:28:37.627198  334810 node_conditions.go:105] duration metric: took 2.909019ms to run NodePressure ...
	I1227 20:28:37.627210  334810 start.go:242] waiting for startup goroutines ...
	I1227 20:28:37.627220  334810 start.go:247] waiting for cluster config update ...
	I1227 20:28:37.627238  334810 start.go:256] writing updated cluster config ...
	I1227 20:28:37.627532  334810 ssh_runner.go:195] Run: rm -f paused
	I1227 20:28:37.631273  334810 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:28:37.635134  334810 pod_ready.go:83] waiting for pod "coredns-7d764666f9-nvnjg" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 20:28:34.509527  329454 pod_ready.go:104] pod "coredns-7d764666f9-nvrq6" is not "Ready", error: <nil>
	W1227 20:28:36.511411  329454 pod_ready.go:104] pod "coredns-7d764666f9-nvrq6" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 27 20:28:10 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:10.626796765Z" level=info msg="Created container 4c243642f0c51a53301c6d94addb80ead9db50be599b731bdabb9de6f1835c89: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bfhwt/kubernetes-dashboard" id=55a6a58f-8329-47ac-844c-51a8628986af name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:28:10 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:10.6275507Z" level=info msg="Starting container: 4c243642f0c51a53301c6d94addb80ead9db50be599b731bdabb9de6f1835c89" id=8cd9eca8-63ba-4eab-a4b5-e02c3ae01265 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:28:10 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:10.62976497Z" level=info msg="Started container" PID=1726 containerID=4c243642f0c51a53301c6d94addb80ead9db50be599b731bdabb9de6f1835c89 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bfhwt/kubernetes-dashboard id=8cd9eca8-63ba-4eab-a4b5-e02c3ae01265 name=/runtime.v1.RuntimeService/StartContainer sandboxID=577d0db9ee1767ecfb4c4ebd936d4a8ed12c9d0056145275d8d0acdadea667b1
	Dec 27 20:28:21 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:21.754722553Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ca92dea4-e789-41c3-bc35-a54dfbf1d7f8 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:28:21 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:21.756070928Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2bbf944f-4713-40b6-9136-58cbfae4ecea name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:28:21 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:21.757471025Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=cd88ad31-771b-4ce2-b7fe-c082946ef2d3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:28:21 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:21.757605143Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:21 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:21.762039235Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:21 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:21.76222265Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7bdf2b446efc9cd75e5ea33a5d822b253305efc7152582b68210529df50caad2/merged/etc/passwd: no such file or directory"
	Dec 27 20:28:21 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:21.762260359Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7bdf2b446efc9cd75e5ea33a5d822b253305efc7152582b68210529df50caad2/merged/etc/group: no such file or directory"
	Dec 27 20:28:21 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:21.762505837Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:21 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:21.792593375Z" level=info msg="Created container 4dd452d9fc8918c8129346b7f3bc1aeee2b171e3ced6307cdf17e6c10db58728: kube-system/storage-provisioner/storage-provisioner" id=cd88ad31-771b-4ce2-b7fe-c082946ef2d3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:28:21 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:21.793181312Z" level=info msg="Starting container: 4dd452d9fc8918c8129346b7f3bc1aeee2b171e3ced6307cdf17e6c10db58728" id=8e260854-2c7a-46b8-9593-10374a431328 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:28:21 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:21.794899402Z" level=info msg="Started container" PID=1751 containerID=4dd452d9fc8918c8129346b7f3bc1aeee2b171e3ced6307cdf17e6c10db58728 description=kube-system/storage-provisioner/storage-provisioner id=8e260854-2c7a-46b8-9593-10374a431328 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a464d96a728e31f809f7c57eb0b24ee3beef9d32f9374b3edc2ee80f9bae265
	Dec 27 20:28:26 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:26.650568787Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=55284258-919b-45fe-8aff-1de6946ff5e1 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:28:26 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:26.6516016Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=91265805-8d72-404c-b5c1-27e1566f67c7 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:28:26 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:26.652600845Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r69tk/dashboard-metrics-scraper" id=ab1f7d3e-158b-4813-92f2-e2a6ba3e6557 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:28:26 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:26.652707899Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:26 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:26.660358001Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:26 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:26.661045011Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:26 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:26.68665531Z" level=info msg="Created container 8f2b2729bf6e3b49564f3fc1e36113740430da0d8d8e061686d3ab36bcfa129e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r69tk/dashboard-metrics-scraper" id=ab1f7d3e-158b-4813-92f2-e2a6ba3e6557 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:28:26 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:26.687209498Z" level=info msg="Starting container: 8f2b2729bf6e3b49564f3fc1e36113740430da0d8d8e061686d3ab36bcfa129e" id=94449aa2-5eba-4e35-b1b9-81d443ccbf23 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:28:26 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:26.688877222Z" level=info msg="Started container" PID=1771 containerID=8f2b2729bf6e3b49564f3fc1e36113740430da0d8d8e061686d3ab36bcfa129e description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r69tk/dashboard-metrics-scraper id=94449aa2-5eba-4e35-b1b9-81d443ccbf23 name=/runtime.v1.RuntimeService/StartContainer sandboxID=69e8fb5c02d28632d2b6b6436df46a0f900300dce7be90769c0d1ca9bf17d262
	Dec 27 20:28:26 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:26.769176165Z" level=info msg="Removing container: c4b56d90f83c5cd5ee96fffc53b5f1c0ae3d5acfffcfa46aa4c01f3f1c45dc50" id=96865773-1802-4620-b844-a1daad137163 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:28:26 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:26.778508761Z" level=info msg="Removed container c4b56d90f83c5cd5ee96fffc53b5f1c0ae3d5acfffcfa46aa4c01f3f1c45dc50: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r69tk/dashboard-metrics-scraper" id=96865773-1802-4620-b844-a1daad137163 name=/runtime.v1.RuntimeService/RemoveContainer
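The CRI-O entries above trace the usual CRI sequence per container: ImageStatus, CreateContainer, StartContainer, plus a RemoveContainer when the crashed metrics-scraper attempt is replaced. A sketch of inspecting the same containers from inside the node (profile name and container ID prefix taken from this log):

	# Sketch only: look at the containers CRI-O is reporting on.
	minikube -p old-k8s-version-762177 ssh
	sudo crictl ps -a --name dashboard-metrics-scraper
	sudo crictl inspect 8f2b2729bf6e3
	sudo crictl logs 8f2b2729bf6e3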
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	8f2b2729bf6e3       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago      Exited              dashboard-metrics-scraper   2                   69e8fb5c02d28       dashboard-metrics-scraper-5f989dc9cf-r69tk       kubernetes-dashboard
	4dd452d9fc891       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   3a464d96a728e       storage-provisioner                              kube-system
	4c243642f0c51       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   30 seconds ago      Running             kubernetes-dashboard        0                   577d0db9ee176       kubernetes-dashboard-8694d4445c-bfhwt            kubernetes-dashboard
	eada932e92480       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           50 seconds ago      Running             coredns                     0                   6d0a1d23bb99e       coredns-5dd5756b68-lklgt                         kube-system
	42bc1dbae7bab       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   5fcf8060b142b       busybox                                          default
	c16a2397d5843       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           50 seconds ago      Running             kindnet-cni                 0                   bd87afe9e4ae9       kindnet-89clv                                    kube-system
	1befa902a36e4       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           50 seconds ago      Running             kube-proxy                  0                   6c17bb82afc23       kube-proxy-99q8t                                 kube-system
	01b59b513ae35       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   3a464d96a728e       storage-provisioner                              kube-system
	36cd7d1ea82f1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           53 seconds ago      Running             etcd                        0                   c0df3a943ca9c       etcd-old-k8s-version-762177                      kube-system
	982d6cdd06999       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           53 seconds ago      Running             kube-apiserver              0                   1c1a7d59102b6       kube-apiserver-old-k8s-version-762177            kube-system
	926fd25cbe259       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           53 seconds ago      Running             kube-controller-manager     0                   fc71c75f5c02f       kube-controller-manager-old-k8s-version-762177   kube-system
	e105aca4bc5d8       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           53 seconds ago      Running             kube-scheduler              0                   bd85dd460c22a       kube-scheduler-old-k8s-version-762177            kube-system
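The listing shows dashboard-metrics-scraper already on its second attempt and Exited while everything else is Running, matching the RemoveContainer/CreateContainer churn in the CRI-O log above. A sketch of pulling the logs of the crashed attempt (pod name taken from the listing):

	# Sketch only: logs and events for the exited scraper container listed above.
	kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-5f989dc9cf-r69tk --previous
	kubectl -n kubernetes-dashboard describe pod dashboard-metrics-scraper-5f989dc9cf-r69tk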
	
	
	==> coredns [eada932e924808faeee9c878de584d881778becf355c6f5504553b73b1fec7be] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56886 - 49040 "HINFO IN 6118087216062035664.7615395043752397806. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.073279988s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
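CoreDNS spends its first seconds waiting for the API, logs a single "unsynced Kubernetes API" warning, then serves normally; the NXDOMAIN HINFO lookup is its standard startup self-check. A sketch of confirming it settled (k8s-app=kube-dns is the stock CoreDNS label selector):

	# Sketch only: confirm CoreDNS settled after the startup warnings above.
	kubectl -n kube-system get pods -l k8s-app=kube-dns
	kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20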
	
	
	==> describe nodes <==
	Name:               old-k8s-version-762177
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-762177
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=old-k8s-version-762177
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_26_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:26:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-762177
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:28:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:28:20 +0000   Sat, 27 Dec 2025 20:26:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:28:20 +0000   Sat, 27 Dec 2025 20:26:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:28:20 +0000   Sat, 27 Dec 2025 20:26:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:28:20 +0000   Sat, 27 Dec 2025 20:27:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-762177
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                81258586-7f74-4e22-8b3b-4eafa1fc89ef
	  Boot ID:                    e862e3ac-f8e7-431b-a536-9252fc29cb3f
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-5dd5756b68-lklgt                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-old-k8s-version-762177                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-89clv                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-old-k8s-version-762177             250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-old-k8s-version-762177    200m (2%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-99q8t                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-old-k8s-version-762177             100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-r69tk        0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-bfhwt             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 104s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  Starting                 118s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  117s               kubelet          Node old-k8s-version-762177 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s               kubelet          Node old-k8s-version-762177 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s               kubelet          Node old-k8s-version-762177 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           105s               node-controller  Node old-k8s-version-762177 event: Registered Node old-k8s-version-762177 in Controller
	  Normal  NodeReady                91s                kubelet          Node old-k8s-version-762177 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node old-k8s-version-762177 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node old-k8s-version-762177 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node old-k8s-version-762177 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           39s                node-controller  Node old-k8s-version-762177 event: Registered Node old-k8s-version-762177 in Controller
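The Events table records two kubelet start cycles: the original boot roughly two minutes earlier and the restart about 54s ago, consistent with the cluster restart traced earlier in this log. A sketch of regenerating this view:

	# Sketch only: the same node view plus recent cluster events.
	kubectl describe node old-k8s-version-762177
	kubectl get events -A --sort-by=.lastTimestamp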
	
	
	==> dmesg <==
	[  +0.091803] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026212] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.286875] kauditd_printk_skb: 47 callbacks suppressed
	[Dec27 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e cb b0 88 25 d7 08 06
	[ +12.641300] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 46 2d 1c 8d 7d 08 06
	[  +0.000396] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e cb b0 88 25 d7 08 06
	[Dec27 20:26] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 11 8b 52 63 6c 08 06
	[ +22.750477] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 b6 be 1c 32 17 08 06
	[  +0.000388] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 11 8b 52 63 6c 08 06
	[ +11.650371] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae cd 49 2a 1d c0 08 06
	[Dec27 20:27] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 53 ca 84 73 c0 08 06
	[  +0.000337] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 74 f5 52 32 9a 08 06
	[ +17.275927] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 80 6a 51 25 19 08 06
	[  +0.000341] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ae cd 49 2a 1d c0 08 06
	
	
	==> etcd [36cd7d1ea82f122132780da97e6256d4f13817d670d6667c1f16d860e3bbb36e] <==
	{"level":"info","ts":"2025-12-27T20:27:48.194963Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T20:27:48.195104Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-12-27T20:27:48.195388Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-12-27T20:27:48.195546Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-12-27T20:27:48.195675Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T20:27:48.195723Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T20:27:48.198057Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-27T20:27:48.198127Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-27T20:27:48.198197Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-27T20:27:48.198347Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T20:27:48.198405Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T20:27:49.388344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T20:27:49.388398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T20:27:49.388422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-27T20:27:49.388435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T20:27:49.38844Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-27T20:27:49.388448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T20:27:49.388458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-27T20:27:49.389396Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-762177 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:27:49.389415Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:27:49.389417Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:27:49.389581Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:27:49.389608Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T20:27:49.390591Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:27:49.390596Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
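The etcd log shows the single-member cluster restarting, holding a pre-vote, and re-electing itself leader at term 3 before serving clients on 2379. A sketch of querying the restarted member directly with etcdctl from inside the etcd pod (cert paths are the ones etcd logs above; using healthcheck-client.crt as the client cert, and etcdctl being present in the image, are assumptions):

	# Sketch only: ask the restarted member for its status.
	kubectl -n kube-system exec etcd-old-k8s-version-762177 -- etcdctl \
	  --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt \
	  --key=/var/lib/minikube/certs/etcd/healthcheck-client.key \
	  endpoint status --write-out=table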
	
	
	==> kernel <==
	 20:28:41 up  1:11,  0 user,  load average: 3.50, 3.19, 2.25
	Linux old-k8s-version-762177 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c16a2397d5843507ec06cf68f4c83aa43e3b839822d403026842652a8823a42f] <==
	I1227 20:27:51.245528       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:27:51.245980       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1227 20:27:51.246200       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:27:51.246222       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:27:51.246240       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:27:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:27:51.540356       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:27:51.540384       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:27:51.540395       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:27:51.540532       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 20:27:51.841080       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:27:51.841105       1 metrics.go:72] Registering metrics
	I1227 20:27:51.841155       1 controller.go:711] "Syncing nftables rules"
	I1227 20:28:01.540066       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1227 20:28:01.540139       1 main.go:301] handling current node
	I1227 20:28:11.540103       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1227 20:28:11.540166       1 main.go:301] handling current node
	I1227 20:28:21.540657       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1227 20:28:21.540701       1 main.go:301] handling current node
	I1227 20:28:31.544010       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1227 20:28:31.544047       1 main.go:301] handling current node
	I1227 20:28:41.547022       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1227 20:28:41.547064       1 main.go:301] handling current node
	
	
	==> kube-apiserver [982d6cdd0699931bca3b7344182b9ad5bb73733752da7f6b7e5a1efce4a6c161] <==
	I1227 20:27:50.309497       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I1227 20:27:50.372993       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:27:50.409782       1 shared_informer.go:318] Caches are synced for configmaps
	I1227 20:27:50.409786       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1227 20:27:50.409859       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1227 20:27:50.409875       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1227 20:27:50.409931       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1227 20:27:50.409996       1 aggregator.go:166] initial CRD sync complete...
	I1227 20:27:50.410039       1 autoregister_controller.go:141] Starting autoregister controller
	I1227 20:27:50.410064       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 20:27:50.410091       1 cache.go:39] Caches are synced for autoregister controller
	I1227 20:27:50.410538       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1227 20:27:50.410569       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 20:27:50.448662       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1227 20:27:51.315265       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1227 20:27:51.330266       1 controller.go:624] quota admission added evaluator for: namespaces
	I1227 20:27:51.370578       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1227 20:27:51.389348       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:27:51.401275       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:27:51.408700       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1227 20:27:51.445337       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.45.80"}
	I1227 20:27:51.458002       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.229.43"}
	I1227 20:28:03.194840       1 controller.go:624] quota admission added evaluator for: endpoints
	I1227 20:28:03.244400       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:28:03.395326       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [926fd25cbe2599a750ad591739ab8c2882aa901ea240849ff6d2acbb12f9a31c] <==
	I1227 20:28:03.351763       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="124.604µs"
	I1227 20:28:03.398212       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1227 20:28:03.399447       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1227 20:28:03.405740       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-r69tk"
	I1227 20:28:03.405950       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-bfhwt"
	I1227 20:28:03.409990       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="12.078095ms"
	I1227 20:28:03.411812       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 20:28:03.414233       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.201764ms"
	I1227 20:28:03.420531       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="10.488229ms"
	I1227 20:28:03.420615       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.653µs"
	I1227 20:28:03.422711       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="8.416165ms"
	I1227 20:28:03.422859       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="59.243µs"
	I1227 20:28:03.429896       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="41.877µs"
	I1227 20:28:03.439664       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="41.642µs"
	I1227 20:28:03.464770       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 20:28:03.464793       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1227 20:28:06.725722       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="163.914µs"
	I1227 20:28:07.735254       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="113.141µs"
	I1227 20:28:08.735740       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.06µs"
	I1227 20:28:10.746490       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.266522ms"
	I1227 20:28:10.746617       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="79.179µs"
	I1227 20:28:25.061735       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.613041ms"
	I1227 20:28:25.061820       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="42.854µs"
	I1227 20:28:26.779396       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.408µs"
	I1227 20:28:33.728069       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="72.293µs"
	
	
	==> kube-proxy [1befa902a36e4002b02f43550e2e59ec920f410ef258b8ded14b8e67d83abd04] <==
	I1227 20:27:51.097683       1 server_others.go:69] "Using iptables proxy"
	I1227 20:27:51.108216       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1227 20:27:51.131273       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:27:51.133667       1 server_others.go:152] "Using iptables Proxier"
	I1227 20:27:51.133712       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1227 20:27:51.133721       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1227 20:27:51.133757       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1227 20:27:51.134073       1 server.go:846] "Version info" version="v1.28.0"
	I1227 20:27:51.134089       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:27:51.134843       1 config.go:188] "Starting service config controller"
	I1227 20:27:51.134865       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1227 20:27:51.134942       1 config.go:315] "Starting node config controller"
	I1227 20:27:51.134953       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1227 20:27:51.135028       1 config.go:97] "Starting endpoint slice config controller"
	I1227 20:27:51.135055       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1227 20:27:51.235040       1 shared_informer.go:318] Caches are synced for node config
	I1227 20:27:51.235089       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1227 20:27:51.235133       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [e105aca4bc5d8f2aab2fa4e7fd30105025db36d98d34b0f796be12a2e0458cfb] <==
	I1227 20:27:48.502892       1 serving.go:348] Generated self-signed cert in-memory
	W1227 20:27:50.359434       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 20:27:50.359480       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 20:27:50.359497       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 20:27:50.359509       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 20:27:50.383275       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1227 20:27:50.383306       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:27:50.384847       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 20:27:50.384885       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1227 20:27:50.385936       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1227 20:27:50.389167       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1227 20:27:50.485347       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 27 20:28:03 old-k8s-version-762177 kubelet[726]: I1227 20:28:03.417710     726 topology_manager.go:215] "Topology Admit Handler" podUID="32c3377f-ae5d-4e77-ae87-bbf26d43e921" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-bfhwt"
	Dec 27 20:28:03 old-k8s-version-762177 kubelet[726]: I1227 20:28:03.516423     726 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/32c3377f-ae5d-4e77-ae87-bbf26d43e921-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-bfhwt\" (UID: \"32c3377f-ae5d-4e77-ae87-bbf26d43e921\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bfhwt"
	Dec 27 20:28:03 old-k8s-version-762177 kubelet[726]: I1227 20:28:03.516482     726 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74g2b\" (UniqueName: \"kubernetes.io/projected/5780871f-fc47-4eab-adde-a2c29affa13a-kube-api-access-74g2b\") pod \"dashboard-metrics-scraper-5f989dc9cf-r69tk\" (UID: \"5780871f-fc47-4eab-adde-a2c29affa13a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r69tk"
	Dec 27 20:28:03 old-k8s-version-762177 kubelet[726]: I1227 20:28:03.516661     726 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8m59\" (UniqueName: \"kubernetes.io/projected/32c3377f-ae5d-4e77-ae87-bbf26d43e921-kube-api-access-k8m59\") pod \"kubernetes-dashboard-8694d4445c-bfhwt\" (UID: \"32c3377f-ae5d-4e77-ae87-bbf26d43e921\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bfhwt"
	Dec 27 20:28:03 old-k8s-version-762177 kubelet[726]: I1227 20:28:03.516756     726 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5780871f-fc47-4eab-adde-a2c29affa13a-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-r69tk\" (UID: \"5780871f-fc47-4eab-adde-a2c29affa13a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r69tk"
	Dec 27 20:28:06 old-k8s-version-762177 kubelet[726]: I1227 20:28:06.715298     726 scope.go:117] "RemoveContainer" containerID="dc2af008c830c2f9bf9bacc51f7d7b558820f46d5727868b0d7b44b754332d2e"
	Dec 27 20:28:07 old-k8s-version-762177 kubelet[726]: I1227 20:28:07.719372     726 scope.go:117] "RemoveContainer" containerID="dc2af008c830c2f9bf9bacc51f7d7b558820f46d5727868b0d7b44b754332d2e"
	Dec 27 20:28:07 old-k8s-version-762177 kubelet[726]: I1227 20:28:07.719631     726 scope.go:117] "RemoveContainer" containerID="c4b56d90f83c5cd5ee96fffc53b5f1c0ae3d5acfffcfa46aa4c01f3f1c45dc50"
	Dec 27 20:28:07 old-k8s-version-762177 kubelet[726]: E1227 20:28:07.720025     726 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-r69tk_kubernetes-dashboard(5780871f-fc47-4eab-adde-a2c29affa13a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r69tk" podUID="5780871f-fc47-4eab-adde-a2c29affa13a"
	Dec 27 20:28:08 old-k8s-version-762177 kubelet[726]: I1227 20:28:08.723373     726 scope.go:117] "RemoveContainer" containerID="c4b56d90f83c5cd5ee96fffc53b5f1c0ae3d5acfffcfa46aa4c01f3f1c45dc50"
	Dec 27 20:28:08 old-k8s-version-762177 kubelet[726]: E1227 20:28:08.723715     726 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-r69tk_kubernetes-dashboard(5780871f-fc47-4eab-adde-a2c29affa13a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r69tk" podUID="5780871f-fc47-4eab-adde-a2c29affa13a"
	Dec 27 20:28:10 old-k8s-version-762177 kubelet[726]: I1227 20:28:10.741876     726 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bfhwt" podStartSLOduration=0.911740992 podCreationTimestamp="2025-12-27 20:28:03 +0000 UTC" firstStartedPulling="2025-12-27 20:28:03.739382275 +0000 UTC m=+16.186502779" lastFinishedPulling="2025-12-27 20:28:10.569450312 +0000 UTC m=+23.016570821" observedRunningTime="2025-12-27 20:28:10.741311789 +0000 UTC m=+23.188432309" watchObservedRunningTime="2025-12-27 20:28:10.741809034 +0000 UTC m=+23.188929544"
	Dec 27 20:28:13 old-k8s-version-762177 kubelet[726]: I1227 20:28:13.717262     726 scope.go:117] "RemoveContainer" containerID="c4b56d90f83c5cd5ee96fffc53b5f1c0ae3d5acfffcfa46aa4c01f3f1c45dc50"
	Dec 27 20:28:13 old-k8s-version-762177 kubelet[726]: E1227 20:28:13.717591     726 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-r69tk_kubernetes-dashboard(5780871f-fc47-4eab-adde-a2c29affa13a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r69tk" podUID="5780871f-fc47-4eab-adde-a2c29affa13a"
	Dec 27 20:28:21 old-k8s-version-762177 kubelet[726]: I1227 20:28:21.753803     726 scope.go:117] "RemoveContainer" containerID="01b59b513ae35e88da523ea015be42462d6f4599e4797daa7b6679fbbed4661e"
	Dec 27 20:28:26 old-k8s-version-762177 kubelet[726]: I1227 20:28:26.649986     726 scope.go:117] "RemoveContainer" containerID="c4b56d90f83c5cd5ee96fffc53b5f1c0ae3d5acfffcfa46aa4c01f3f1c45dc50"
	Dec 27 20:28:26 old-k8s-version-762177 kubelet[726]: I1227 20:28:26.768036     726 scope.go:117] "RemoveContainer" containerID="c4b56d90f83c5cd5ee96fffc53b5f1c0ae3d5acfffcfa46aa4c01f3f1c45dc50"
	Dec 27 20:28:26 old-k8s-version-762177 kubelet[726]: I1227 20:28:26.768297     726 scope.go:117] "RemoveContainer" containerID="8f2b2729bf6e3b49564f3fc1e36113740430da0d8d8e061686d3ab36bcfa129e"
	Dec 27 20:28:26 old-k8s-version-762177 kubelet[726]: E1227 20:28:26.768632     726 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-r69tk_kubernetes-dashboard(5780871f-fc47-4eab-adde-a2c29affa13a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r69tk" podUID="5780871f-fc47-4eab-adde-a2c29affa13a"
	Dec 27 20:28:33 old-k8s-version-762177 kubelet[726]: I1227 20:28:33.717623     726 scope.go:117] "RemoveContainer" containerID="8f2b2729bf6e3b49564f3fc1e36113740430da0d8d8e061686d3ab36bcfa129e"
	Dec 27 20:28:33 old-k8s-version-762177 kubelet[726]: E1227 20:28:33.717899     726 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-r69tk_kubernetes-dashboard(5780871f-fc47-4eab-adde-a2c29affa13a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r69tk" podUID="5780871f-fc47-4eab-adde-a2c29affa13a"
	Dec 27 20:28:39 old-k8s-version-762177 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 20:28:39 old-k8s-version-762177 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 20:28:39 old-k8s-version-762177 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:28:39 old-k8s-version-762177 systemd[1]: kubelet.service: Consumed 1.434s CPU time.
	
	
	==> kubernetes-dashboard [4c243642f0c51a53301c6d94addb80ead9db50be599b731bdabb9de6f1835c89] <==
	2025/12/27 20:28:10 Using namespace: kubernetes-dashboard
	2025/12/27 20:28:10 Using in-cluster config to connect to apiserver
	2025/12/27 20:28:10 Using secret token for csrf signing
	2025/12/27 20:28:10 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 20:28:10 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 20:28:10 Successful initial request to the apiserver, version: v1.28.0
	2025/12/27 20:28:10 Generating JWE encryption key
	2025/12/27 20:28:10 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 20:28:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 20:28:10 Initializing JWE encryption key from synchronized object
	2025/12/27 20:28:10 Creating in-cluster Sidecar client
	2025/12/27 20:28:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 20:28:10 Serving insecurely on HTTP port: 9090
	2025/12/27 20:28:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 20:28:10 Starting overwatch
	
	
	==> storage-provisioner [01b59b513ae35e88da523ea015be42462d6f4599e4797daa7b6679fbbed4661e] <==
	I1227 20:27:51.067154       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 20:28:21.070423       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [4dd452d9fc8918c8129346b7f3bc1aeee2b171e3ced6307cdf17e6c10db58728] <==
	I1227 20:28:21.807039       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 20:28:21.814480       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 20:28:21.814515       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1227 20:28:39.209129       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 20:28:39.209236       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e92b6e7a-16bc-4c05-885c-17e1f4060299", APIVersion:"v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-762177_60b6e83f-d084-4e6d-8402-bbe818f45f5b became leader
	I1227 20:28:39.209299       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-762177_60b6e83f-d084-4e6d-8402-bbe818f45f5b!
	I1227 20:28:39.310028       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-762177_60b6e83f-d084-4e6d-8402-bbe818f45f5b!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-762177 -n old-k8s-version-762177
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-762177 -n old-k8s-version-762177: exit status 2 (399.574482ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-762177 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-762177
helpers_test.go:244: (dbg) docker inspect old-k8s-version-762177:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b10dcfebdaaf022555cb070af7e83f24bcff8c91713c44d481ba68802b155444",
	        "Created": "2025-12-27T20:26:31.0677059Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 324184,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:27:40.56875739Z",
	            "FinishedAt": "2025-12-27T20:27:39.262226981Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/b10dcfebdaaf022555cb070af7e83f24bcff8c91713c44d481ba68802b155444/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b10dcfebdaaf022555cb070af7e83f24bcff8c91713c44d481ba68802b155444/hostname",
	        "HostsPath": "/var/lib/docker/containers/b10dcfebdaaf022555cb070af7e83f24bcff8c91713c44d481ba68802b155444/hosts",
	        "LogPath": "/var/lib/docker/containers/b10dcfebdaaf022555cb070af7e83f24bcff8c91713c44d481ba68802b155444/b10dcfebdaaf022555cb070af7e83f24bcff8c91713c44d481ba68802b155444-json.log",
	        "Name": "/old-k8s-version-762177",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-762177:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-762177",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b10dcfebdaaf022555cb070af7e83f24bcff8c91713c44d481ba68802b155444",
	                "LowerDir": "/var/lib/docker/overlay2/929535ff224193aaf6235bf2344829b22539dbfd7b23129e5d3c2b26b91f334f-init/diff:/var/lib/docker/overlay2/37a0694d5e09176fe02add5b16d603e44d65e3a985a3465775c60819ea782502/diff",
	                "MergedDir": "/var/lib/docker/overlay2/929535ff224193aaf6235bf2344829b22539dbfd7b23129e5d3c2b26b91f334f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/929535ff224193aaf6235bf2344829b22539dbfd7b23129e5d3c2b26b91f334f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/929535ff224193aaf6235bf2344829b22539dbfd7b23129e5d3c2b26b91f334f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-762177",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-762177/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-762177",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-762177",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-762177",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "eef53885e2a3b5a92f25bf3a5da7b7b0cdac4cb7e377b3cb17e2e56870c84360",
	            "SandboxKey": "/var/run/docker/netns/eef53885e2a3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-762177": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bbffe05820d013a4fca696f72125227eec8cd0ee61afcb8620d53b5d2291b7b7",
	                    "EndpointID": "4690c75d81c70225f94b58fe949b5362fab52827825a7aa34ad9a64d499cdd02",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "2e:74:e0:19:b0:b5",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-762177",
	                        "b10dcfebdaaf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-762177 -n old-k8s-version-762177
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-762177 -n old-k8s-version-762177: exit status 2 (393.04367ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-762177 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-762177 logs -n 25: (1.299426841s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-436655 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ ssh     │ -p bridge-436655 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo containerd config dump                                                                                                                                                                                                  │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo crio config                                                                                                                                                                                                             │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ delete  │ -p bridge-436655                                                                                                                                                                                                                              │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ addons  │ enable metrics-server -p no-preload-014435 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ delete  │ -p disable-driver-mounts-541137                                                                                                                                                                                                               │ disable-driver-mounts-541137 │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ start   │ -p old-k8s-version-762177 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-762177       │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:28 UTC │
	│ start   │ -p default-k8s-diff-port-954154 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-954154 │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:28 UTC │
	│ stop    │ -p no-preload-014435 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ addons  │ enable dashboard -p no-preload-014435 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ start   │ -p no-preload-014435 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-820583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-820583           │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ stop    │ -p embed-certs-820583 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-820583           │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-954154 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-954154 │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-820583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-820583           │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ start   │ -p embed-certs-820583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-820583           │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-954154 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-954154 │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ image   │ old-k8s-version-762177 image list --format=json                                                                                                                                                                                               │ old-k8s-version-762177       │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ pause   │ -p old-k8s-version-762177 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-762177       │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:28:27
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:28:27.869622  334810 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:28:27.869949  334810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:28:27.869962  334810 out.go:374] Setting ErrFile to fd 2...
	I1227 20:28:27.869967  334810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:28:27.870223  334810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:28:27.870728  334810 out.go:368] Setting JSON to false
	I1227 20:28:27.872105  334810 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4257,"bootTime":1766863051,"procs":327,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 20:28:27.872172  334810 start.go:143] virtualization: kvm guest
	I1227 20:28:27.873965  334810 out.go:179] * [embed-certs-820583] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 20:28:27.875414  334810 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:28:27.875437  334810 notify.go:221] Checking for updates...
	I1227 20:28:27.877247  334810 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:28:27.878437  334810 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:28:27.879374  334810 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube
	I1227 20:28:27.880273  334810 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 20:28:27.881375  334810 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:28:27.884900  334810 config.go:182] Loaded profile config "embed-certs-820583": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:27.885728  334810 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:28:27.910903  334810 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 20:28:27.911104  334810 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:28:27.971781  334810 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-27 20:28:27.961369743 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:28:27.971939  334810 docker.go:319] overlay module found
	I1227 20:28:27.973482  334810 out.go:179] * Using the docker driver based on existing profile
	I1227 20:28:27.974609  334810 start.go:309] selected driver: docker
	I1227 20:28:27.974629  334810 start.go:928] validating driver "docker" against &{Name:embed-certs-820583 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-820583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:28:27.974735  334810 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:28:27.975559  334810 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:28:28.038285  334810 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-27 20:28:28.027690351 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:28:28.038663  334810 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:28:28.038713  334810 cni.go:84] Creating CNI manager for ""
	I1227 20:28:28.038790  334810 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:28:28.038846  334810 start.go:353] cluster config:
	{Name:embed-certs-820583 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-820583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:28:28.040659  334810 out.go:179] * Starting "embed-certs-820583" primary control-plane node in "embed-certs-820583" cluster
	I1227 20:28:28.042430  334810 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:28:28.043639  334810 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:28:28.044629  334810 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:28:28.044657  334810 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 20:28:28.044666  334810 cache.go:65] Caching tarball of preloaded images
	I1227 20:28:28.044711  334810 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:28:28.044774  334810 preload.go:251] Found /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 20:28:28.044787  334810 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:28:28.044885  334810 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/config.json ...
	I1227 20:28:28.066231  334810 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:28:28.066251  334810 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:28:28.066265  334810 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:28:28.066296  334810 start.go:360] acquireMachinesLock for embed-certs-820583: {Name:mk01eaa0328a4f3967965b40089a5a188a2ca888 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:28:28.066352  334810 start.go:364] duration metric: took 35.282µs to acquireMachinesLock for "embed-certs-820583"
	I1227 20:28:28.066367  334810 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:28:28.066374  334810 fix.go:54] fixHost starting: 
	I1227 20:28:28.066576  334810 cli_runner.go:164] Run: docker container inspect embed-certs-820583 --format={{.State.Status}}
	I1227 20:28:28.086021  334810 fix.go:112] recreateIfNeeded on embed-certs-820583: state=Stopped err=<nil>
	W1227 20:28:28.086079  334810 fix.go:138] unexpected machine state, will restart: <nil>
	W1227 20:28:23.509601  329454 pod_ready.go:104] pod "coredns-7d764666f9-nvrq6" is not "Ready", error: <nil>
	W1227 20:28:26.009326  329454 pod_ready.go:104] pod "coredns-7d764666f9-nvrq6" is not "Ready", error: <nil>
	W1227 20:28:28.009634  329454 pod_ready.go:104] pod "coredns-7d764666f9-nvrq6" is not "Ready", error: <nil>
	I1227 20:28:28.087636  334810 out.go:252] * Restarting existing docker container for "embed-certs-820583" ...
	I1227 20:28:28.087746  334810 cli_runner.go:164] Run: docker start embed-certs-820583
	I1227 20:28:28.347507  334810 cli_runner.go:164] Run: docker container inspect embed-certs-820583 --format={{.State.Status}}
	I1227 20:28:28.367678  334810 kic.go:430] container "embed-certs-820583" state is running.
	I1227 20:28:28.368042  334810 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-820583
	I1227 20:28:28.389020  334810 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/config.json ...
	I1227 20:28:28.389289  334810 machine.go:94] provisionDockerMachine start ...
	I1227 20:28:28.389387  334810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:28:28.410031  334810 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:28.410349  334810 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1227 20:28:28.410367  334810 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:28:28.411185  334810 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50516->127.0.0.1:33118: read: connection reset by peer
	I1227 20:28:31.533873  334810 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-820583
	
	I1227 20:28:31.533901  334810 ubuntu.go:182] provisioning hostname "embed-certs-820583"
	I1227 20:28:31.533990  334810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:28:31.552057  334810 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:31.552270  334810 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1227 20:28:31.552290  334810 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-820583 && echo "embed-certs-820583" | sudo tee /etc/hostname
	I1227 20:28:31.685659  334810 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-820583
	
	I1227 20:28:31.685742  334810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:28:31.704279  334810 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:31.704498  334810 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1227 20:28:31.704515  334810 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-820583' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-820583/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-820583' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:28:31.826164  334810 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:28:31.826190  334810 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-10897/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-10897/.minikube}
	I1227 20:28:31.826219  334810 ubuntu.go:190] setting up certificates
	I1227 20:28:31.826236  334810 provision.go:84] configureAuth start
	I1227 20:28:31.826291  334810 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-820583
	I1227 20:28:31.843777  334810 provision.go:143] copyHostCerts
	I1227 20:28:31.843826  334810 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem, removing ...
	I1227 20:28:31.843840  334810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem
	I1227 20:28:31.843901  334810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem (1078 bytes)
	I1227 20:28:31.844012  334810 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem, removing ...
	I1227 20:28:31.844023  334810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem
	I1227 20:28:31.844051  334810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem (1123 bytes)
	I1227 20:28:31.844113  334810 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem, removing ...
	I1227 20:28:31.844121  334810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem
	I1227 20:28:31.844144  334810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem (1675 bytes)
	I1227 20:28:31.844191  334810 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem org=jenkins.embed-certs-820583 san=[127.0.0.1 192.168.76.2 embed-certs-820583 localhost minikube]
	I1227 20:28:31.948193  334810 provision.go:177] copyRemoteCerts
	I1227 20:28:31.948257  334810 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:28:31.948304  334810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:28:31.966006  334810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/embed-certs-820583/id_rsa Username:docker}
	I1227 20:28:32.055748  334810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:28:32.073036  334810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 20:28:32.089202  334810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:28:32.106163  334810 provision.go:87] duration metric: took 279.906131ms to configureAuth
	I1227 20:28:32.106186  334810 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:28:32.106343  334810 config.go:182] Loaded profile config "embed-certs-820583": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:32.106434  334810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:28:32.124139  334810 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:32.124367  334810 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1227 20:28:32.124389  334810 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:28:32.419316  334810 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:28:32.419342  334810 machine.go:97] duration metric: took 4.030032868s to provisionDockerMachine
	I1227 20:28:32.419356  334810 start.go:293] postStartSetup for "embed-certs-820583" (driver="docker")
	I1227 20:28:32.419369  334810 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:28:32.419451  334810 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:28:32.419503  334810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:28:32.438147  334810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/embed-certs-820583/id_rsa Username:docker}
	I1227 20:28:32.527971  334810 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:28:32.531299  334810 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:28:32.531324  334810 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:28:32.531334  334810 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/addons for local assets ...
	I1227 20:28:32.531380  334810 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/files for local assets ...
	I1227 20:28:32.531470  334810 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem -> 144272.pem in /etc/ssl/certs
	I1227 20:28:32.531616  334810 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:28:32.538773  334810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:28:32.555609  334810 start.go:296] duration metric: took 136.241549ms for postStartSetup
	I1227 20:28:32.555668  334810 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:28:32.555700  334810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:28:32.573043  334810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/embed-certs-820583/id_rsa Username:docker}
	I1227 20:28:32.660907  334810 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:28:32.665337  334810 fix.go:56] duration metric: took 4.598958428s for fixHost
	I1227 20:28:32.665356  334810 start.go:83] releasing machines lock for "embed-certs-820583", held for 4.598995531s
	I1227 20:28:32.665409  334810 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-820583
	I1227 20:28:32.683426  334810 ssh_runner.go:195] Run: cat /version.json
	I1227 20:28:32.683478  334810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:28:32.683497  334810 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:28:32.683579  334810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:28:32.702843  334810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/embed-certs-820583/id_rsa Username:docker}
	I1227 20:28:32.703410  334810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/embed-certs-820583/id_rsa Username:docker}
	I1227 20:28:32.844134  334810 ssh_runner.go:195] Run: systemctl --version
	I1227 20:28:32.850997  334810 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	W1227 20:28:30.508760  329454 pod_ready.go:104] pod "coredns-7d764666f9-nvrq6" is not "Ready", error: <nil>
	W1227 20:28:32.509283  329454 pod_ready.go:104] pod "coredns-7d764666f9-nvrq6" is not "Ready", error: <nil>
	I1227 20:28:32.884461  334810 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:28:32.889000  334810 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:28:32.889073  334810 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:28:32.896850  334810 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:28:32.896869  334810 start.go:496] detecting cgroup driver to use...
	I1227 20:28:32.896898  334810 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 20:28:32.896988  334810 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:28:32.910536  334810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:28:32.922230  334810 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:28:32.922292  334810 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:28:32.935655  334810 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:28:32.947069  334810 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:28:33.022743  334810 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:28:33.101392  334810 docker.go:234] disabling docker service ...
	I1227 20:28:33.101465  334810 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:28:33.114942  334810 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:28:33.126563  334810 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:28:33.210535  334810 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:28:33.294275  334810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:28:33.306416  334810 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:28:33.320038  334810 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:28:33.320084  334810 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:33.328622  334810 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 20:28:33.328672  334810 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:33.336992  334810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:33.345045  334810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:33.353552  334810 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:28:33.361898  334810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:33.370368  334810 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:33.378203  334810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:33.386298  334810 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:28:33.393268  334810 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:28:33.400398  334810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:28:33.477431  334810 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:28:33.616365  334810 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:28:33.616443  334810 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:28:33.620412  334810 start.go:574] Will wait 60s for crictl version
	I1227 20:28:33.620463  334810 ssh_runner.go:195] Run: which crictl
	I1227 20:28:33.623944  334810 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:28:33.647125  334810 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:28:33.647205  334810 ssh_runner.go:195] Run: crio --version
	I1227 20:28:33.674615  334810 ssh_runner.go:195] Run: crio --version
	I1227 20:28:33.703195  334810 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:28:33.704251  334810 cli_runner.go:164] Run: docker network inspect embed-certs-820583 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:28:33.721941  334810 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 20:28:33.726491  334810 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:28:33.736959  334810 kubeadm.go:884] updating cluster {Name:embed-certs-820583 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-820583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:28:33.737069  334810 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:28:33.737114  334810 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:28:33.770436  334810 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:28:33.770456  334810 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:28:33.770512  334810 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:28:33.796084  334810 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:28:33.796103  334810 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:28:33.796110  334810 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 20:28:33.796206  334810 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-820583 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-820583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:28:33.796277  334810 ssh_runner.go:195] Run: crio config
	I1227 20:28:33.839567  334810 cni.go:84] Creating CNI manager for ""
	I1227 20:28:33.839588  334810 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:28:33.839604  334810 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:28:33.839626  334810 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-820583 NodeName:embed-certs-820583 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:28:33.839784  334810 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-820583"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:28:33.839843  334810 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:28:33.848036  334810 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:28:33.848128  334810 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:28:33.855522  334810 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1227 20:28:33.867530  334810 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:28:33.879334  334810 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1227 20:28:33.891782  334810 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:28:33.895216  334810 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:28:33.904553  334810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:28:33.980095  334810 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:28:34.002072  334810 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583 for IP: 192.168.76.2
	I1227 20:28:34.002097  334810 certs.go:195] generating shared ca certs ...
	I1227 20:28:34.002115  334810 certs.go:227] acquiring lock for ca certs: {Name:mkf159d13acb73e6d0ac046e4aaef1dce56a185d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:34.002247  334810 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key
	I1227 20:28:34.002293  334810 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key
	I1227 20:28:34.002303  334810 certs.go:257] generating profile certs ...
	I1227 20:28:34.002381  334810 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/client.key
	I1227 20:28:34.002440  334810 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/apiserver.key.da959220
	I1227 20:28:34.002479  334810 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/proxy-client.key
	I1227 20:28:34.002605  334810 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem (1338 bytes)
	W1227 20:28:34.002642  334810 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427_empty.pem, impossibly tiny 0 bytes
	I1227 20:28:34.002648  334810 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 20:28:34.002671  334810 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:28:34.002697  334810 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:28:34.002722  334810 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem (1675 bytes)
	I1227 20:28:34.002763  334810 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:28:34.003366  334810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:28:34.021779  334810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:28:34.040055  334810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:28:34.058850  334810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 20:28:34.082045  334810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1227 20:28:34.100535  334810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 20:28:34.117589  334810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:28:34.134162  334810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/embed-certs-820583/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:28:34.150821  334810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /usr/share/ca-certificates/144272.pem (1708 bytes)
	I1227 20:28:34.167384  334810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:28:34.184031  334810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem --> /usr/share/ca-certificates/14427.pem (1338 bytes)
	I1227 20:28:34.201678  334810 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:28:34.213419  334810 ssh_runner.go:195] Run: openssl version
	I1227 20:28:34.219254  334810 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/144272.pem
	I1227 20:28:34.226410  334810 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/144272.pem /etc/ssl/certs/144272.pem
	I1227 20:28:34.233324  334810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144272.pem
	I1227 20:28:34.236958  334810 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 19:58 /usr/share/ca-certificates/144272.pem
	I1227 20:28:34.237013  334810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144272.pem
	I1227 20:28:34.272308  334810 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:28:34.279526  334810 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:34.287291  334810 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:28:34.295280  334810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:34.299228  334810 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:34.299268  334810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:34.333791  334810 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:28:34.341076  334810 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14427.pem
	I1227 20:28:34.348022  334810 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14427.pem /etc/ssl/certs/14427.pem
	I1227 20:28:34.355313  334810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14427.pem
	I1227 20:28:34.359567  334810 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 19:58 /usr/share/ca-certificates/14427.pem
	I1227 20:28:34.359615  334810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14427.pem
	I1227 20:28:34.395170  334810 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:28:34.402558  334810 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:28:34.406175  334810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:28:34.440284  334810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:28:34.474375  334810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:28:34.517295  334810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:28:34.559835  334810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:28:34.604769  334810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1227 20:28:34.661556  334810 kubeadm.go:401] StartCluster: {Name:embed-certs-820583 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-820583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:28:34.661652  334810 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:28:34.661708  334810 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:28:34.694031  334810 cri.go:96] found id: "e3219756543583e2bc1bc017b95edaeb3245180fdb7e19b51f610a80a7bdaf8f"
	I1227 20:28:34.694055  334810 cri.go:96] found id: "c920e23ef438912ca2e52bd7890591042bc74ed5ef30728cbbd1281035c27d3e"
	I1227 20:28:34.694062  334810 cri.go:96] found id: "7d0b7a7e858d75dabf988a9b76fa95b39147d055357b9b794f4486154f20ba5a"
	I1227 20:28:34.694067  334810 cri.go:96] found id: "383462ccad1518d79addd7cf9399f63bb733b06b0d9e1d6abe521c276e668b92"
	I1227 20:28:34.694070  334810 cri.go:96] found id: ""
	I1227 20:28:34.694113  334810 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 20:28:34.706566  334810 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:28:34Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:28:34.706662  334810 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:28:34.714873  334810 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:28:34.714887  334810 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:28:34.714942  334810 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:28:34.722390  334810 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:28:34.723325  334810 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-820583" does not appear in /home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:28:34.723986  334810 kubeconfig.go:62] /home/jenkins/minikube-integration/22332-10897/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-820583" cluster setting kubeconfig missing "embed-certs-820583" context setting]
	I1227 20:28:34.725000  334810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/kubeconfig: {Name:mk4ccafb996928672a8e78a62ce47d8add645009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:34.726873  334810 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:28:34.734960  334810 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1227 20:28:34.734984  334810 kubeadm.go:602] duration metric: took 20.091058ms to restartPrimaryControlPlane
	I1227 20:28:34.734992  334810 kubeadm.go:403] duration metric: took 73.4475ms to StartCluster
	I1227 20:28:34.735009  334810 settings.go:142] acquiring lock: {Name:mkf3077b12a3cef0d75d0d0642c2652aa74718f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:34.735063  334810 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:28:34.737210  334810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/kubeconfig: {Name:mk4ccafb996928672a8e78a62ce47d8add645009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:34.737440  334810 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:28:34.737495  334810 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:28:34.737610  334810 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-820583"
	I1227 20:28:34.737628  334810 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-820583"
	W1227 20:28:34.737637  334810 addons.go:248] addon storage-provisioner should already be in state true
	I1227 20:28:34.737667  334810 config.go:182] Loaded profile config "embed-certs-820583": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:34.737684  334810 addons.go:70] Setting dashboard=true in profile "embed-certs-820583"
	I1227 20:28:34.737696  334810 addons.go:239] Setting addon dashboard=true in "embed-certs-820583"
	W1227 20:28:34.737703  334810 addons.go:248] addon dashboard should already be in state true
	I1227 20:28:34.737727  334810 host.go:66] Checking if "embed-certs-820583" exists ...
	I1227 20:28:34.737714  334810 addons.go:70] Setting default-storageclass=true in profile "embed-certs-820583"
	I1227 20:28:34.737747  334810 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-820583"
	I1227 20:28:34.737674  334810 host.go:66] Checking if "embed-certs-820583" exists ...
	I1227 20:28:34.738112  334810 cli_runner.go:164] Run: docker container inspect embed-certs-820583 --format={{.State.Status}}
	I1227 20:28:34.738273  334810 cli_runner.go:164] Run: docker container inspect embed-certs-820583 --format={{.State.Status}}
	I1227 20:28:34.738303  334810 cli_runner.go:164] Run: docker container inspect embed-certs-820583 --format={{.State.Status}}
	I1227 20:28:34.740057  334810 out.go:179] * Verifying Kubernetes components...
	I1227 20:28:34.741162  334810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:28:34.764457  334810 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 20:28:34.764557  334810 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:28:34.764970  334810 addons.go:239] Setting addon default-storageclass=true in "embed-certs-820583"
	W1227 20:28:34.764992  334810 addons.go:248] addon default-storageclass should already be in state true
	I1227 20:28:34.765019  334810 host.go:66] Checking if "embed-certs-820583" exists ...
	I1227 20:28:34.765487  334810 cli_runner.go:164] Run: docker container inspect embed-certs-820583 --format={{.State.Status}}
	I1227 20:28:34.765791  334810 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:28:34.765809  334810 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:28:34.765859  334810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:28:34.766866  334810 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 20:28:34.767993  334810 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 20:28:34.768012  334810 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 20:28:34.768067  334810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:28:34.795949  334810 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:28:34.795974  334810 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:28:34.796033  334810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:28:34.800317  334810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/embed-certs-820583/id_rsa Username:docker}
	I1227 20:28:34.802056  334810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/embed-certs-820583/id_rsa Username:docker}
	I1227 20:28:34.821558  334810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/embed-certs-820583/id_rsa Username:docker}
	I1227 20:28:34.894478  334810 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:28:34.900306  334810 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:28:34.905154  334810 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 20:28:34.905174  334810 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 20:28:34.909440  334810 node_ready.go:35] waiting up to 6m0s for node "embed-certs-820583" to be "Ready" ...
	I1227 20:28:34.919719  334810 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:28:34.920263  334810 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 20:28:34.920302  334810 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 20:28:34.933741  334810 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 20:28:34.933762  334810 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 20:28:34.948336  334810 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 20:28:34.948364  334810 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 20:28:34.964787  334810 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 20:28:34.964813  334810 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 20:28:34.978251  334810 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 20:28:34.978273  334810 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 20:28:34.990784  334810 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 20:28:34.990802  334810 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 20:28:35.002643  334810 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 20:28:35.002661  334810 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 20:28:35.015206  334810 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:28:35.015221  334810 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 20:28:35.027091  334810 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:28:36.016643  334810 node_ready.go:49] node "embed-certs-820583" is "Ready"
	I1227 20:28:36.016677  334810 node_ready.go:38] duration metric: took 1.107196827s for node "embed-certs-820583" to be "Ready" ...
	I1227 20:28:36.016694  334810 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:28:36.016751  334810 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:28:36.595834  334810 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.695491743s)
	I1227 20:28:36.595865  334810 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.676123115s)
	I1227 20:28:36.596163  334810 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.569029601s)
	I1227 20:28:36.596338  334810 api_server.go:72] duration metric: took 1.858867918s to wait for apiserver process to appear ...
	I1227 20:28:36.596358  334810 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:28:36.596416  334810 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 20:28:36.598769  334810 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-820583 addons enable metrics-server
	
	I1227 20:28:36.603545  334810 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:28:36.603578  334810 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:28:36.612082  334810 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 20:28:36.613102  334810 addons.go:530] duration metric: took 1.875612937s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1227 20:28:37.097095  334810 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 20:28:37.102574  334810 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:28:37.102602  334810 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:28:37.597070  334810 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 20:28:37.601498  334810 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 20:28:37.602476  334810 api_server.go:141] control plane version: v1.35.0
	I1227 20:28:37.602499  334810 api_server.go:131] duration metric: took 1.006135405s to wait for apiserver health ...
	I1227 20:28:37.602507  334810 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:28:37.606107  334810 system_pods.go:59] 8 kube-system pods found
	I1227 20:28:37.606141  334810 system_pods.go:61] "coredns-7d764666f9-nvnjg" [43ffce66-ea7f-41f4-aa47-ce8860d08b61] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:28:37.606153  334810 system_pods.go:61] "etcd-embed-certs-820583" [f38c956c-3a86-4b12-8fd8-6984329006a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:28:37.606166  334810 system_pods.go:61] "kindnet-6d59t" [4b85db12-4b05-4f39-af95-5c3a6aa7c0ad] Running
	I1227 20:28:37.606176  334810 system_pods.go:61] "kube-apiserver-embed-certs-820583" [c8a09b40-722a-4ac2-a872-5ffaa9b12c59] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:28:37.606188  334810 system_pods.go:61] "kube-controller-manager-embed-certs-820583" [30e23f74-cef5-47d5-b851-361af993b344] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:28:37.606198  334810 system_pods.go:61] "kube-proxy-srwxn" [8d08af7a-1a92-4a9d-b68e-c816e37f2d26] Running
	I1227 20:28:37.606213  334810 system_pods.go:61] "kube-scheduler-embed-certs-820583" [bf51cd6b-c054-4544-b104-6c19cd0d5491] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:28:37.606217  334810 system_pods.go:61] "storage-provisioner" [c02473c8-cc31-4a36-8823-cea2e486cdba] Running
	I1227 20:28:37.606228  334810 system_pods.go:74] duration metric: took 3.714942ms to wait for pod list to return data ...
	I1227 20:28:37.606242  334810 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:28:37.608668  334810 default_sa.go:45] found service account: "default"
	I1227 20:28:37.608687  334810 default_sa.go:55] duration metric: took 2.438397ms for default service account to be created ...
	I1227 20:28:37.608695  334810 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:28:37.611095  334810 system_pods.go:86] 8 kube-system pods found
	I1227 20:28:37.611116  334810 system_pods.go:89] "coredns-7d764666f9-nvnjg" [43ffce66-ea7f-41f4-aa47-ce8860d08b61] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:28:37.611124  334810 system_pods.go:89] "etcd-embed-certs-820583" [f38c956c-3a86-4b12-8fd8-6984329006a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:28:37.611129  334810 system_pods.go:89] "kindnet-6d59t" [4b85db12-4b05-4f39-af95-5c3a6aa7c0ad] Running
	I1227 20:28:37.611134  334810 system_pods.go:89] "kube-apiserver-embed-certs-820583" [c8a09b40-722a-4ac2-a872-5ffaa9b12c59] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:28:37.611140  334810 system_pods.go:89] "kube-controller-manager-embed-certs-820583" [30e23f74-cef5-47d5-b851-361af993b344] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:28:37.611147  334810 system_pods.go:89] "kube-proxy-srwxn" [8d08af7a-1a92-4a9d-b68e-c816e37f2d26] Running
	I1227 20:28:37.611153  334810 system_pods.go:89] "kube-scheduler-embed-certs-820583" [bf51cd6b-c054-4544-b104-6c19cd0d5491] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:28:37.611159  334810 system_pods.go:89] "storage-provisioner" [c02473c8-cc31-4a36-8823-cea2e486cdba] Running
	I1227 20:28:37.611165  334810 system_pods.go:126] duration metric: took 2.465719ms to wait for k8s-apps to be running ...
	I1227 20:28:37.611174  334810 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:28:37.611215  334810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:28:37.624220  334810 system_svc.go:56] duration metric: took 13.036393ms WaitForService to wait for kubelet
	I1227 20:28:37.624254  334810 kubeadm.go:587] duration metric: took 2.886787601s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:28:37.624280  334810 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:28:37.627161  334810 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 20:28:37.627184  334810 node_conditions.go:123] node cpu capacity is 8
	I1227 20:28:37.627198  334810 node_conditions.go:105] duration metric: took 2.909019ms to run NodePressure ...
	I1227 20:28:37.627210  334810 start.go:242] waiting for startup goroutines ...
	I1227 20:28:37.627220  334810 start.go:247] waiting for cluster config update ...
	I1227 20:28:37.627238  334810 start.go:256] writing updated cluster config ...
	I1227 20:28:37.627532  334810 ssh_runner.go:195] Run: rm -f paused
	I1227 20:28:37.631273  334810 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:28:37.635134  334810 pod_ready.go:83] waiting for pod "coredns-7d764666f9-nvnjg" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 20:28:34.509527  329454 pod_ready.go:104] pod "coredns-7d764666f9-nvrq6" is not "Ready", error: <nil>
	W1227 20:28:36.511411  329454 pod_ready.go:104] pod "coredns-7d764666f9-nvrq6" is not "Ready", error: <nil>
	W1227 20:28:39.640394  334810 pod_ready.go:104] pod "coredns-7d764666f9-nvnjg" is not "Ready", error: <nil>
	W1227 20:28:41.642212  334810 pod_ready.go:104] pod "coredns-7d764666f9-nvnjg" is not "Ready", error: <nil>
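
For context on the healthz polling recorded above: api_server.go keeps re-requesting https://192.168.76.2:8443/healthz (roughly every 500ms in this run) until the 500 responses from the not-yet-finished poststarthooks give way to a plain 200 "ok". The snippet below is a minimal, illustrative sketch of that kind of wait loop, not minikube's actual implementation; the URL and interval are copied from the log, and the insecure TLS setting is an assumption made only because the apiserver serves a self-signed certificate.

    // healthzpoll.go: poll an apiserver /healthz endpoint until it reports 200,
    // mirroring the api_server.go wait loop recorded in the log above (sketch only).
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// Skip certificate verification for this sketch; the test apiserver uses a self-signed cert.
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	url := "https://192.168.76.2:8443/healthz" // endpoint taken from the log above
    	for {
    		resp, err := client.Get(url)
    		if err != nil {
    			fmt.Println("healthz not reachable yet:", err)
    		} else {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    			if resp.StatusCode == http.StatusOK {
    				return // "ok" - the apiserver reports healthy
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // the log shows checks roughly every half second
    	}
    }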
	
	
	==> CRI-O <==
	Dec 27 20:28:10 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:10.626796765Z" level=info msg="Created container 4c243642f0c51a53301c6d94addb80ead9db50be599b731bdabb9de6f1835c89: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bfhwt/kubernetes-dashboard" id=55a6a58f-8329-47ac-844c-51a8628986af name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:28:10 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:10.6275507Z" level=info msg="Starting container: 4c243642f0c51a53301c6d94addb80ead9db50be599b731bdabb9de6f1835c89" id=8cd9eca8-63ba-4eab-a4b5-e02c3ae01265 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:28:10 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:10.62976497Z" level=info msg="Started container" PID=1726 containerID=4c243642f0c51a53301c6d94addb80ead9db50be599b731bdabb9de6f1835c89 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bfhwt/kubernetes-dashboard id=8cd9eca8-63ba-4eab-a4b5-e02c3ae01265 name=/runtime.v1.RuntimeService/StartContainer sandboxID=577d0db9ee1767ecfb4c4ebd936d4a8ed12c9d0056145275d8d0acdadea667b1
	Dec 27 20:28:21 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:21.754722553Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ca92dea4-e789-41c3-bc35-a54dfbf1d7f8 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:28:21 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:21.756070928Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2bbf944f-4713-40b6-9136-58cbfae4ecea name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:28:21 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:21.757471025Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=cd88ad31-771b-4ce2-b7fe-c082946ef2d3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:28:21 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:21.757605143Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:21 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:21.762039235Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:21 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:21.76222265Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7bdf2b446efc9cd75e5ea33a5d822b253305efc7152582b68210529df50caad2/merged/etc/passwd: no such file or directory"
	Dec 27 20:28:21 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:21.762260359Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7bdf2b446efc9cd75e5ea33a5d822b253305efc7152582b68210529df50caad2/merged/etc/group: no such file or directory"
	Dec 27 20:28:21 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:21.762505837Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:21 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:21.792593375Z" level=info msg="Created container 4dd452d9fc8918c8129346b7f3bc1aeee2b171e3ced6307cdf17e6c10db58728: kube-system/storage-provisioner/storage-provisioner" id=cd88ad31-771b-4ce2-b7fe-c082946ef2d3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:28:21 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:21.793181312Z" level=info msg="Starting container: 4dd452d9fc8918c8129346b7f3bc1aeee2b171e3ced6307cdf17e6c10db58728" id=8e260854-2c7a-46b8-9593-10374a431328 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:28:21 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:21.794899402Z" level=info msg="Started container" PID=1751 containerID=4dd452d9fc8918c8129346b7f3bc1aeee2b171e3ced6307cdf17e6c10db58728 description=kube-system/storage-provisioner/storage-provisioner id=8e260854-2c7a-46b8-9593-10374a431328 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a464d96a728e31f809f7c57eb0b24ee3beef9d32f9374b3edc2ee80f9bae265
	Dec 27 20:28:26 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:26.650568787Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=55284258-919b-45fe-8aff-1de6946ff5e1 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:28:26 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:26.6516016Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=91265805-8d72-404c-b5c1-27e1566f67c7 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:28:26 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:26.652600845Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r69tk/dashboard-metrics-scraper" id=ab1f7d3e-158b-4813-92f2-e2a6ba3e6557 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:28:26 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:26.652707899Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:26 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:26.660358001Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:26 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:26.661045011Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:26 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:26.68665531Z" level=info msg="Created container 8f2b2729bf6e3b49564f3fc1e36113740430da0d8d8e061686d3ab36bcfa129e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r69tk/dashboard-metrics-scraper" id=ab1f7d3e-158b-4813-92f2-e2a6ba3e6557 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:28:26 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:26.687209498Z" level=info msg="Starting container: 8f2b2729bf6e3b49564f3fc1e36113740430da0d8d8e061686d3ab36bcfa129e" id=94449aa2-5eba-4e35-b1b9-81d443ccbf23 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:28:26 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:26.688877222Z" level=info msg="Started container" PID=1771 containerID=8f2b2729bf6e3b49564f3fc1e36113740430da0d8d8e061686d3ab36bcfa129e description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r69tk/dashboard-metrics-scraper id=94449aa2-5eba-4e35-b1b9-81d443ccbf23 name=/runtime.v1.RuntimeService/StartContainer sandboxID=69e8fb5c02d28632d2b6b6436df46a0f900300dce7be90769c0d1ca9bf17d262
	Dec 27 20:28:26 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:26.769176165Z" level=info msg="Removing container: c4b56d90f83c5cd5ee96fffc53b5f1c0ae3d5acfffcfa46aa4c01f3f1c45dc50" id=96865773-1802-4620-b844-a1daad137163 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:28:26 old-k8s-version-762177 crio[562]: time="2025-12-27T20:28:26.778508761Z" level=info msg="Removed container c4b56d90f83c5cd5ee96fffc53b5f1c0ae3d5acfffcfa46aa4c01f3f1c45dc50: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r69tk/dashboard-metrics-scraper" id=96865773-1802-4620-b844-a1daad137163 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	8f2b2729bf6e3       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   2                   69e8fb5c02d28       dashboard-metrics-scraper-5f989dc9cf-r69tk       kubernetes-dashboard
	4dd452d9fc891       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   3a464d96a728e       storage-provisioner                              kube-system
	4c243642f0c51       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   33 seconds ago      Running             kubernetes-dashboard        0                   577d0db9ee176       kubernetes-dashboard-8694d4445c-bfhwt            kubernetes-dashboard
	eada932e92480       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           52 seconds ago      Running             coredns                     0                   6d0a1d23bb99e       coredns-5dd5756b68-lklgt                         kube-system
	42bc1dbae7bab       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   5fcf8060b142b       busybox                                          default
	c16a2397d5843       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           52 seconds ago      Running             kindnet-cni                 0                   bd87afe9e4ae9       kindnet-89clv                                    kube-system
	1befa902a36e4       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           52 seconds ago      Running             kube-proxy                  0                   6c17bb82afc23       kube-proxy-99q8t                                 kube-system
	01b59b513ae35       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   3a464d96a728e       storage-provisioner                              kube-system
	36cd7d1ea82f1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           55 seconds ago      Running             etcd                        0                   c0df3a943ca9c       etcd-old-k8s-version-762177                      kube-system
	982d6cdd06999       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           55 seconds ago      Running             kube-apiserver              0                   1c1a7d59102b6       kube-apiserver-old-k8s-version-762177            kube-system
	926fd25cbe259       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           55 seconds ago      Running             kube-controller-manager     0                   fc71c75f5c02f       kube-controller-manager-old-k8s-version-762177   kube-system
	e105aca4bc5d8       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           55 seconds ago      Running             kube-scheduler              0                   bd85dd460c22a       kube-scheduler-old-k8s-version-762177            kube-system
	
	
	==> coredns [eada932e924808faeee9c878de584d881778becf355c6f5504553b73b1fec7be] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56886 - 49040 "HINFO IN 6118087216062035664.7615395043752397806. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.073279988s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-762177
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-762177
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=old-k8s-version-762177
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_26_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:26:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-762177
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:28:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:28:20 +0000   Sat, 27 Dec 2025 20:26:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:28:20 +0000   Sat, 27 Dec 2025 20:26:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:28:20 +0000   Sat, 27 Dec 2025 20:26:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:28:20 +0000   Sat, 27 Dec 2025 20:27:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-762177
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                81258586-7f74-4e22-8b3b-4eafa1fc89ef
	  Boot ID:                    e862e3ac-f8e7-431b-a536-9252fc29cb3f
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-5dd5756b68-lklgt                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-old-k8s-version-762177                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         119s
	  kube-system                 kindnet-89clv                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-old-k8s-version-762177             250m (3%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-old-k8s-version-762177    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-99q8t                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-old-k8s-version-762177             100m (1%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-r69tk        0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-bfhwt             0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  119s               kubelet          Node old-k8s-version-762177 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s               kubelet          Node old-k8s-version-762177 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s               kubelet          Node old-k8s-version-762177 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           107s               node-controller  Node old-k8s-version-762177 event: Registered Node old-k8s-version-762177 in Controller
	  Normal  NodeReady                93s                kubelet          Node old-k8s-version-762177 status is now: NodeReady
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)  kubelet          Node old-k8s-version-762177 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)  kubelet          Node old-k8s-version-762177 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)  kubelet          Node old-k8s-version-762177 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                node-controller  Node old-k8s-version-762177 event: Registered Node old-k8s-version-762177 in Controller
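
As a quick sanity check of the "Allocated resources" block above: 850m of requested CPU against the node's 8 CPUs (8000m) is about 10.6%, which describe shows as 10%, and the 220Mi memory request against 32863352Ki of capacity is well under 1%, shown as 0%. The tiny program below just reproduces that arithmetic with the values copied from the table; it is illustrative and not kubectl's rounding code.

    // allocpct.go: recompute the request/capacity percentages shown under
    // "Allocated resources" for old-k8s-version-762177 (values copied from the table above).
    package main

    import "fmt"

    func pct(request, capacity float64) float64 { return request / capacity * 100 }

    func main() {
    	fmt.Printf("cpu:    %.1f%%\n", pct(850, 8000))          // 850m of 8 CPUs (8000m) -> ~10.6%, shown as 10%
    	fmt.Printf("memory: %.1f%%\n", pct(220*1024, 32863352)) // 220Mi of 32863352Ki    -> ~0.7%,  shown as 0%
    }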
	
	
	==> dmesg <==
	[  +0.091803] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026212] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.286875] kauditd_printk_skb: 47 callbacks suppressed
	[Dec27 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e cb b0 88 25 d7 08 06
	[ +12.641300] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 46 2d 1c 8d 7d 08 06
	[  +0.000396] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e cb b0 88 25 d7 08 06
	[Dec27 20:26] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 11 8b 52 63 6c 08 06
	[ +22.750477] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 b6 be 1c 32 17 08 06
	[  +0.000388] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 11 8b 52 63 6c 08 06
	[ +11.650371] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae cd 49 2a 1d c0 08 06
	[Dec27 20:27] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 53 ca 84 73 c0 08 06
	[  +0.000337] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 74 f5 52 32 9a 08 06
	[ +17.275927] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 80 6a 51 25 19 08 06
	[  +0.000341] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ae cd 49 2a 1d c0 08 06
	
	
	==> etcd [36cd7d1ea82f122132780da97e6256d4f13817d670d6667c1f16d860e3bbb36e] <==
	{"level":"info","ts":"2025-12-27T20:27:48.194963Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T20:27:48.195104Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-12-27T20:27:48.195388Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-12-27T20:27:48.195546Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-12-27T20:27:48.195675Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T20:27:48.195723Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T20:27:48.198057Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-27T20:27:48.198127Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-27T20:27:48.198197Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-27T20:27:48.198347Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T20:27:48.198405Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T20:27:49.388344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T20:27:49.388398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T20:27:49.388422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-27T20:27:49.388435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T20:27:49.38844Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-27T20:27:49.388448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T20:27:49.388458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-27T20:27:49.389396Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-762177 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:27:49.389415Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:27:49.389417Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:27:49.389581Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:27:49.389608Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T20:27:49.390591Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:27:49.390596Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 20:28:43 up  1:11,  0 user,  load average: 3.46, 3.19, 2.25
	Linux old-k8s-version-762177 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c16a2397d5843507ec06cf68f4c83aa43e3b839822d403026842652a8823a42f] <==
	I1227 20:27:51.245528       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:27:51.245980       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1227 20:27:51.246200       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:27:51.246222       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:27:51.246240       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:27:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:27:51.540356       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:27:51.540384       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:27:51.540395       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:27:51.540532       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 20:27:51.841080       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:27:51.841105       1 metrics.go:72] Registering metrics
	I1227 20:27:51.841155       1 controller.go:711] "Syncing nftables rules"
	I1227 20:28:01.540066       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1227 20:28:01.540139       1 main.go:301] handling current node
	I1227 20:28:11.540103       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1227 20:28:11.540166       1 main.go:301] handling current node
	I1227 20:28:21.540657       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1227 20:28:21.540701       1 main.go:301] handling current node
	I1227 20:28:31.544010       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1227 20:28:31.544047       1 main.go:301] handling current node
	I1227 20:28:41.547022       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1227 20:28:41.547064       1 main.go:301] handling current node
	
	
	==> kube-apiserver [982d6cdd0699931bca3b7344182b9ad5bb73733752da7f6b7e5a1efce4a6c161] <==
	I1227 20:27:50.309497       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I1227 20:27:50.372993       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:27:50.409782       1 shared_informer.go:318] Caches are synced for configmaps
	I1227 20:27:50.409786       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1227 20:27:50.409859       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1227 20:27:50.409875       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1227 20:27:50.409931       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1227 20:27:50.409996       1 aggregator.go:166] initial CRD sync complete...
	I1227 20:27:50.410039       1 autoregister_controller.go:141] Starting autoregister controller
	I1227 20:27:50.410064       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 20:27:50.410091       1 cache.go:39] Caches are synced for autoregister controller
	I1227 20:27:50.410538       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1227 20:27:50.410569       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 20:27:50.448662       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1227 20:27:51.315265       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1227 20:27:51.330266       1 controller.go:624] quota admission added evaluator for: namespaces
	I1227 20:27:51.370578       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1227 20:27:51.389348       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:27:51.401275       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:27:51.408700       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1227 20:27:51.445337       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.45.80"}
	I1227 20:27:51.458002       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.229.43"}
	I1227 20:28:03.194840       1 controller.go:624] quota admission added evaluator for: endpoints
	I1227 20:28:03.244400       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:28:03.395326       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [926fd25cbe2599a750ad591739ab8c2882aa901ea240849ff6d2acbb12f9a31c] <==
	I1227 20:28:03.351763       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="124.604µs"
	I1227 20:28:03.398212       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1227 20:28:03.399447       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1227 20:28:03.405740       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-r69tk"
	I1227 20:28:03.405950       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-bfhwt"
	I1227 20:28:03.409990       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="12.078095ms"
	I1227 20:28:03.411812       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 20:28:03.414233       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.201764ms"
	I1227 20:28:03.420531       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="10.488229ms"
	I1227 20:28:03.420615       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.653µs"
	I1227 20:28:03.422711       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="8.416165ms"
	I1227 20:28:03.422859       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="59.243µs"
	I1227 20:28:03.429896       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="41.877µs"
	I1227 20:28:03.439664       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="41.642µs"
	I1227 20:28:03.464770       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 20:28:03.464793       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1227 20:28:06.725722       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="163.914µs"
	I1227 20:28:07.735254       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="113.141µs"
	I1227 20:28:08.735740       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.06µs"
	I1227 20:28:10.746490       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.266522ms"
	I1227 20:28:10.746617       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="79.179µs"
	I1227 20:28:25.061735       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.613041ms"
	I1227 20:28:25.061820       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="42.854µs"
	I1227 20:28:26.779396       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.408µs"
	I1227 20:28:33.728069       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="72.293µs"
	
	
	==> kube-proxy [1befa902a36e4002b02f43550e2e59ec920f410ef258b8ded14b8e67d83abd04] <==
	I1227 20:27:51.097683       1 server_others.go:69] "Using iptables proxy"
	I1227 20:27:51.108216       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1227 20:27:51.131273       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:27:51.133667       1 server_others.go:152] "Using iptables Proxier"
	I1227 20:27:51.133712       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1227 20:27:51.133721       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1227 20:27:51.133757       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1227 20:27:51.134073       1 server.go:846] "Version info" version="v1.28.0"
	I1227 20:27:51.134089       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:27:51.134843       1 config.go:188] "Starting service config controller"
	I1227 20:27:51.134865       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1227 20:27:51.134942       1 config.go:315] "Starting node config controller"
	I1227 20:27:51.134953       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1227 20:27:51.135028       1 config.go:97] "Starting endpoint slice config controller"
	I1227 20:27:51.135055       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1227 20:27:51.235040       1 shared_informer.go:318] Caches are synced for node config
	I1227 20:27:51.235089       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1227 20:27:51.235133       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [e105aca4bc5d8f2aab2fa4e7fd30105025db36d98d34b0f796be12a2e0458cfb] <==
	I1227 20:27:48.502892       1 serving.go:348] Generated self-signed cert in-memory
	W1227 20:27:50.359434       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 20:27:50.359480       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 20:27:50.359497       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 20:27:50.359509       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 20:27:50.383275       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1227 20:27:50.383306       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:27:50.384847       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 20:27:50.384885       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1227 20:27:50.385936       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1227 20:27:50.389167       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1227 20:27:50.485347       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 27 20:28:03 old-k8s-version-762177 kubelet[726]: I1227 20:28:03.417710     726 topology_manager.go:215] "Topology Admit Handler" podUID="32c3377f-ae5d-4e77-ae87-bbf26d43e921" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-bfhwt"
	Dec 27 20:28:03 old-k8s-version-762177 kubelet[726]: I1227 20:28:03.516423     726 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/32c3377f-ae5d-4e77-ae87-bbf26d43e921-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-bfhwt\" (UID: \"32c3377f-ae5d-4e77-ae87-bbf26d43e921\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bfhwt"
	Dec 27 20:28:03 old-k8s-version-762177 kubelet[726]: I1227 20:28:03.516482     726 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74g2b\" (UniqueName: \"kubernetes.io/projected/5780871f-fc47-4eab-adde-a2c29affa13a-kube-api-access-74g2b\") pod \"dashboard-metrics-scraper-5f989dc9cf-r69tk\" (UID: \"5780871f-fc47-4eab-adde-a2c29affa13a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r69tk"
	Dec 27 20:28:03 old-k8s-version-762177 kubelet[726]: I1227 20:28:03.516661     726 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8m59\" (UniqueName: \"kubernetes.io/projected/32c3377f-ae5d-4e77-ae87-bbf26d43e921-kube-api-access-k8m59\") pod \"kubernetes-dashboard-8694d4445c-bfhwt\" (UID: \"32c3377f-ae5d-4e77-ae87-bbf26d43e921\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bfhwt"
	Dec 27 20:28:03 old-k8s-version-762177 kubelet[726]: I1227 20:28:03.516756     726 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5780871f-fc47-4eab-adde-a2c29affa13a-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-r69tk\" (UID: \"5780871f-fc47-4eab-adde-a2c29affa13a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r69tk"
	Dec 27 20:28:06 old-k8s-version-762177 kubelet[726]: I1227 20:28:06.715298     726 scope.go:117] "RemoveContainer" containerID="dc2af008c830c2f9bf9bacc51f7d7b558820f46d5727868b0d7b44b754332d2e"
	Dec 27 20:28:07 old-k8s-version-762177 kubelet[726]: I1227 20:28:07.719372     726 scope.go:117] "RemoveContainer" containerID="dc2af008c830c2f9bf9bacc51f7d7b558820f46d5727868b0d7b44b754332d2e"
	Dec 27 20:28:07 old-k8s-version-762177 kubelet[726]: I1227 20:28:07.719631     726 scope.go:117] "RemoveContainer" containerID="c4b56d90f83c5cd5ee96fffc53b5f1c0ae3d5acfffcfa46aa4c01f3f1c45dc50"
	Dec 27 20:28:07 old-k8s-version-762177 kubelet[726]: E1227 20:28:07.720025     726 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-r69tk_kubernetes-dashboard(5780871f-fc47-4eab-adde-a2c29affa13a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r69tk" podUID="5780871f-fc47-4eab-adde-a2c29affa13a"
	Dec 27 20:28:08 old-k8s-version-762177 kubelet[726]: I1227 20:28:08.723373     726 scope.go:117] "RemoveContainer" containerID="c4b56d90f83c5cd5ee96fffc53b5f1c0ae3d5acfffcfa46aa4c01f3f1c45dc50"
	Dec 27 20:28:08 old-k8s-version-762177 kubelet[726]: E1227 20:28:08.723715     726 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-r69tk_kubernetes-dashboard(5780871f-fc47-4eab-adde-a2c29affa13a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r69tk" podUID="5780871f-fc47-4eab-adde-a2c29affa13a"
	Dec 27 20:28:10 old-k8s-version-762177 kubelet[726]: I1227 20:28:10.741876     726 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bfhwt" podStartSLOduration=0.911740992 podCreationTimestamp="2025-12-27 20:28:03 +0000 UTC" firstStartedPulling="2025-12-27 20:28:03.739382275 +0000 UTC m=+16.186502779" lastFinishedPulling="2025-12-27 20:28:10.569450312 +0000 UTC m=+23.016570821" observedRunningTime="2025-12-27 20:28:10.741311789 +0000 UTC m=+23.188432309" watchObservedRunningTime="2025-12-27 20:28:10.741809034 +0000 UTC m=+23.188929544"
	Dec 27 20:28:13 old-k8s-version-762177 kubelet[726]: I1227 20:28:13.717262     726 scope.go:117] "RemoveContainer" containerID="c4b56d90f83c5cd5ee96fffc53b5f1c0ae3d5acfffcfa46aa4c01f3f1c45dc50"
	Dec 27 20:28:13 old-k8s-version-762177 kubelet[726]: E1227 20:28:13.717591     726 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-r69tk_kubernetes-dashboard(5780871f-fc47-4eab-adde-a2c29affa13a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r69tk" podUID="5780871f-fc47-4eab-adde-a2c29affa13a"
	Dec 27 20:28:21 old-k8s-version-762177 kubelet[726]: I1227 20:28:21.753803     726 scope.go:117] "RemoveContainer" containerID="01b59b513ae35e88da523ea015be42462d6f4599e4797daa7b6679fbbed4661e"
	Dec 27 20:28:26 old-k8s-version-762177 kubelet[726]: I1227 20:28:26.649986     726 scope.go:117] "RemoveContainer" containerID="c4b56d90f83c5cd5ee96fffc53b5f1c0ae3d5acfffcfa46aa4c01f3f1c45dc50"
	Dec 27 20:28:26 old-k8s-version-762177 kubelet[726]: I1227 20:28:26.768036     726 scope.go:117] "RemoveContainer" containerID="c4b56d90f83c5cd5ee96fffc53b5f1c0ae3d5acfffcfa46aa4c01f3f1c45dc50"
	Dec 27 20:28:26 old-k8s-version-762177 kubelet[726]: I1227 20:28:26.768297     726 scope.go:117] "RemoveContainer" containerID="8f2b2729bf6e3b49564f3fc1e36113740430da0d8d8e061686d3ab36bcfa129e"
	Dec 27 20:28:26 old-k8s-version-762177 kubelet[726]: E1227 20:28:26.768632     726 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-r69tk_kubernetes-dashboard(5780871f-fc47-4eab-adde-a2c29affa13a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r69tk" podUID="5780871f-fc47-4eab-adde-a2c29affa13a"
	Dec 27 20:28:33 old-k8s-version-762177 kubelet[726]: I1227 20:28:33.717623     726 scope.go:117] "RemoveContainer" containerID="8f2b2729bf6e3b49564f3fc1e36113740430da0d8d8e061686d3ab36bcfa129e"
	Dec 27 20:28:33 old-k8s-version-762177 kubelet[726]: E1227 20:28:33.717899     726 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-r69tk_kubernetes-dashboard(5780871f-fc47-4eab-adde-a2c29affa13a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r69tk" podUID="5780871f-fc47-4eab-adde-a2c29affa13a"
	Dec 27 20:28:39 old-k8s-version-762177 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 20:28:39 old-k8s-version-762177 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 20:28:39 old-k8s-version-762177 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:28:39 old-k8s-version-762177 systemd[1]: kubelet.service: Consumed 1.434s CPU time.
	
	
	==> kubernetes-dashboard [4c243642f0c51a53301c6d94addb80ead9db50be599b731bdabb9de6f1835c89] <==
	2025/12/27 20:28:10 Starting overwatch
	2025/12/27 20:28:10 Using namespace: kubernetes-dashboard
	2025/12/27 20:28:10 Using in-cluster config to connect to apiserver
	2025/12/27 20:28:10 Using secret token for csrf signing
	2025/12/27 20:28:10 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 20:28:10 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 20:28:10 Successful initial request to the apiserver, version: v1.28.0
	2025/12/27 20:28:10 Generating JWE encryption key
	2025/12/27 20:28:10 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 20:28:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 20:28:10 Initializing JWE encryption key from synchronized object
	2025/12/27 20:28:10 Creating in-cluster Sidecar client
	2025/12/27 20:28:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 20:28:10 Serving insecurely on HTTP port: 9090
	2025/12/27 20:28:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [01b59b513ae35e88da523ea015be42462d6f4599e4797daa7b6679fbbed4661e] <==
	I1227 20:27:51.067154       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 20:28:21.070423       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [4dd452d9fc8918c8129346b7f3bc1aeee2b171e3ced6307cdf17e6c10db58728] <==
	I1227 20:28:21.807039       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 20:28:21.814480       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 20:28:21.814515       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1227 20:28:39.209129       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 20:28:39.209236       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e92b6e7a-16bc-4c05-885c-17e1f4060299", APIVersion:"v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-762177_60b6e83f-d084-4e6d-8402-bbe818f45f5b became leader
	I1227 20:28:39.209299       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-762177_60b6e83f-d084-4e6d-8402-bbe818f45f5b!
	I1227 20:28:39.310028       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-762177_60b6e83f-d084-4e6d-8402-bbe818f45f5b!
	

                                                
                                                
-- /stdout --
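The two storage-provisioner blocks above are a single restart: the first instance exited fatally after a 30-second i/o timeout reaching the in-cluster apiserver at 10.96.0.1:443, and its replacement then acquired the kube-system/k8s.io-minikube-hostpath lease at 20:28:39. If that timeout were persistent rather than transient, the service behind 10.96.0.1 and its endpoints could be checked with something like the following (a sketch only; it assumes the standard kubernetes service in the default namespace and that the context is still reachable):

	kubectl --context old-k8s-version-762177 get svc kubernetes
	kubectl --context old-k8s-version-762177 get endpoints kubernetes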
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-762177 -n old-k8s-version-762177
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-762177 -n old-k8s-version-762177: exit status 2 (393.251369ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-762177 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.14s)
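The post-mortem status probe above exits 2 while still printing Running, which the harness itself flags as possibly ok. Under the same assumption that the old-k8s-version-762177 context remains reachable, the apiserver can also be asked for its own readiness directly rather than via minikube status (a sketch):

	kubectl --context old-k8s-version-762177 get --raw='/readyz?verbose'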

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (7.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-014435 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-014435 --alsologtostderr -v=1: exit status 80 (2.579611356s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-014435 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 20:29:01.289252  344572 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:29:01.289643  344572 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:29:01.289652  344572 out.go:374] Setting ErrFile to fd 2...
	I1227 20:29:01.289658  344572 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:29:01.290131  344572 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:29:01.290502  344572 out.go:368] Setting JSON to false
	I1227 20:29:01.290523  344572 mustload.go:66] Loading cluster: no-preload-014435
	I1227 20:29:01.291056  344572 config.go:182] Loaded profile config "no-preload-014435": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:29:01.291831  344572 cli_runner.go:164] Run: docker container inspect no-preload-014435 --format={{.State.Status}}
	I1227 20:29:01.319666  344572 host.go:66] Checking if "no-preload-014435" exists ...
	I1227 20:29:01.320016  344572 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:29:01.402207  344572 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:86 SystemTime:2025-12-27 20:29:01.380004188 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:29:01.403049  344572 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22332/minikube-v1.37.0-1766811082-22332-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766811082-22332/minikube-v1.37.0-1766811082-22332-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766811082-22332-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:no-preload-014435 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool
=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1227 20:29:01.404878  344572 out.go:179] * Pausing node no-preload-014435 ... 
	I1227 20:29:01.409675  344572 host.go:66] Checking if "no-preload-014435" exists ...
	I1227 20:29:01.410048  344572 ssh_runner.go:195] Run: systemctl --version
	I1227 20:29:01.410095  344572 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-014435
	I1227 20:29:01.431168  344572 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/no-preload-014435/id_rsa Username:docker}
	I1227 20:29:01.524978  344572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:29:01.549193  344572 pause.go:52] kubelet running: true
	I1227 20:29:01.549303  344572 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:29:01.716562  344572 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:29:01.716652  344572 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:29:01.784141  344572 cri.go:96] found id: "c901e1afa45cf91c5793bd5e02e20608288cabb56b1e113347e01c6d78bf95e4"
	I1227 20:29:01.784178  344572 cri.go:96] found id: "b9b509ee6a53f3f461dc21f5d20cb2ed21b39cc41369daf17da2bf1e93644530"
	I1227 20:29:01.784184  344572 cri.go:96] found id: "e95e2820359521d157e6543a261cc1ecc9b5fcfaec66bf820cd24c038ec2d52f"
	I1227 20:29:01.784201  344572 cri.go:96] found id: "5e16382753ebc8b7372b1654da2876be48c0fdc56c35110cbf3c811d7a0f6ed0"
	I1227 20:29:01.784209  344572 cri.go:96] found id: "b9a65772953dd8fa2257dfb6c558509023e860f7da1e48e5f6bfae91bae57d63"
	I1227 20:29:01.784215  344572 cri.go:96] found id: "ccbaca54231342c2e05e7f6f39601d2b1cde71a9e688363aa99cd9f16b7b740b"
	I1227 20:29:01.784220  344572 cri.go:96] found id: "7e08e2ae41d9f45611fbcd90fd0c42176961f4b3b41557748ee9cae592b1a0c6"
	I1227 20:29:01.784229  344572 cri.go:96] found id: "a18c71b2140b36f980a1c87567071c8701b3e5f8aa4ab2f6fb52beb4656e9bd2"
	I1227 20:29:01.784233  344572 cri.go:96] found id: "455bcec8a175a162715bf84ffdfb0f031b741ff15312c1ed41956c3dafb97b6e"
	I1227 20:29:01.784246  344572 cri.go:96] found id: "a67c4e635ce57fb32f62f332eda073862febf665cb42072d6e2b8208a2cf15b1"
	I1227 20:29:01.784254  344572 cri.go:96] found id: "0ca092b3ce3ee7591a69a1325fcce1cc752a14da32abb9827a163b163f63c990"
	I1227 20:29:01.784259  344572 cri.go:96] found id: ""
	I1227 20:29:01.784310  344572 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:29:01.796739  344572 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:29:01Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:29:02.084181  344572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:29:02.101061  344572 pause.go:52] kubelet running: false
	I1227 20:29:02.101127  344572 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:29:02.295246  344572 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:29:02.295351  344572 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:29:02.380066  344572 cri.go:96] found id: "c901e1afa45cf91c5793bd5e02e20608288cabb56b1e113347e01c6d78bf95e4"
	I1227 20:29:02.380101  344572 cri.go:96] found id: "b9b509ee6a53f3f461dc21f5d20cb2ed21b39cc41369daf17da2bf1e93644530"
	I1227 20:29:02.380108  344572 cri.go:96] found id: "e95e2820359521d157e6543a261cc1ecc9b5fcfaec66bf820cd24c038ec2d52f"
	I1227 20:29:02.380113  344572 cri.go:96] found id: "5e16382753ebc8b7372b1654da2876be48c0fdc56c35110cbf3c811d7a0f6ed0"
	I1227 20:29:02.380117  344572 cri.go:96] found id: "b9a65772953dd8fa2257dfb6c558509023e860f7da1e48e5f6bfae91bae57d63"
	I1227 20:29:02.380122  344572 cri.go:96] found id: "ccbaca54231342c2e05e7f6f39601d2b1cde71a9e688363aa99cd9f16b7b740b"
	I1227 20:29:02.380126  344572 cri.go:96] found id: "7e08e2ae41d9f45611fbcd90fd0c42176961f4b3b41557748ee9cae592b1a0c6"
	I1227 20:29:02.380130  344572 cri.go:96] found id: "a18c71b2140b36f980a1c87567071c8701b3e5f8aa4ab2f6fb52beb4656e9bd2"
	I1227 20:29:02.380135  344572 cri.go:96] found id: "455bcec8a175a162715bf84ffdfb0f031b741ff15312c1ed41956c3dafb97b6e"
	I1227 20:29:02.380142  344572 cri.go:96] found id: "a67c4e635ce57fb32f62f332eda073862febf665cb42072d6e2b8208a2cf15b1"
	I1227 20:29:02.380147  344572 cri.go:96] found id: "0ca092b3ce3ee7591a69a1325fcce1cc752a14da32abb9827a163b163f63c990"
	I1227 20:29:02.380152  344572 cri.go:96] found id: ""
	I1227 20:29:02.380211  344572 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:29:02.733398  344572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:29:02.753348  344572 pause.go:52] kubelet running: false
	I1227 20:29:02.753420  344572 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:29:02.973336  344572 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:29:02.973461  344572 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:29:03.054775  344572 cri.go:96] found id: "c901e1afa45cf91c5793bd5e02e20608288cabb56b1e113347e01c6d78bf95e4"
	I1227 20:29:03.054794  344572 cri.go:96] found id: "b9b509ee6a53f3f461dc21f5d20cb2ed21b39cc41369daf17da2bf1e93644530"
	I1227 20:29:03.054798  344572 cri.go:96] found id: "e95e2820359521d157e6543a261cc1ecc9b5fcfaec66bf820cd24c038ec2d52f"
	I1227 20:29:03.054801  344572 cri.go:96] found id: "5e16382753ebc8b7372b1654da2876be48c0fdc56c35110cbf3c811d7a0f6ed0"
	I1227 20:29:03.054804  344572 cri.go:96] found id: "b9a65772953dd8fa2257dfb6c558509023e860f7da1e48e5f6bfae91bae57d63"
	I1227 20:29:03.054808  344572 cri.go:96] found id: "ccbaca54231342c2e05e7f6f39601d2b1cde71a9e688363aa99cd9f16b7b740b"
	I1227 20:29:03.054812  344572 cri.go:96] found id: "7e08e2ae41d9f45611fbcd90fd0c42176961f4b3b41557748ee9cae592b1a0c6"
	I1227 20:29:03.054816  344572 cri.go:96] found id: "a18c71b2140b36f980a1c87567071c8701b3e5f8aa4ab2f6fb52beb4656e9bd2"
	I1227 20:29:03.054821  344572 cri.go:96] found id: "455bcec8a175a162715bf84ffdfb0f031b741ff15312c1ed41956c3dafb97b6e"
	I1227 20:29:03.054828  344572 cri.go:96] found id: "a67c4e635ce57fb32f62f332eda073862febf665cb42072d6e2b8208a2cf15b1"
	I1227 20:29:03.054832  344572 cri.go:96] found id: "0ca092b3ce3ee7591a69a1325fcce1cc752a14da32abb9827a163b163f63c990"
	I1227 20:29:03.054837  344572 cri.go:96] found id: ""
	I1227 20:29:03.054885  344572 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:29:03.438934  344572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:29:03.457234  344572 pause.go:52] kubelet running: false
	I1227 20:29:03.457303  344572 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:29:03.667748  344572 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:29:03.667837  344572 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:29:03.756607  344572 cri.go:96] found id: "c901e1afa45cf91c5793bd5e02e20608288cabb56b1e113347e01c6d78bf95e4"
	I1227 20:29:03.756825  344572 cri.go:96] found id: "b9b509ee6a53f3f461dc21f5d20cb2ed21b39cc41369daf17da2bf1e93644530"
	I1227 20:29:03.756842  344572 cri.go:96] found id: "e95e2820359521d157e6543a261cc1ecc9b5fcfaec66bf820cd24c038ec2d52f"
	I1227 20:29:03.756848  344572 cri.go:96] found id: "5e16382753ebc8b7372b1654da2876be48c0fdc56c35110cbf3c811d7a0f6ed0"
	I1227 20:29:03.756852  344572 cri.go:96] found id: "b9a65772953dd8fa2257dfb6c558509023e860f7da1e48e5f6bfae91bae57d63"
	I1227 20:29:03.756858  344572 cri.go:96] found id: "ccbaca54231342c2e05e7f6f39601d2b1cde71a9e688363aa99cd9f16b7b740b"
	I1227 20:29:03.756863  344572 cri.go:96] found id: "7e08e2ae41d9f45611fbcd90fd0c42176961f4b3b41557748ee9cae592b1a0c6"
	I1227 20:29:03.756867  344572 cri.go:96] found id: "a18c71b2140b36f980a1c87567071c8701b3e5f8aa4ab2f6fb52beb4656e9bd2"
	I1227 20:29:03.756872  344572 cri.go:96] found id: "455bcec8a175a162715bf84ffdfb0f031b741ff15312c1ed41956c3dafb97b6e"
	I1227 20:29:03.756928  344572 cri.go:96] found id: "a67c4e635ce57fb32f62f332eda073862febf665cb42072d6e2b8208a2cf15b1"
	I1227 20:29:03.756939  344572 cri.go:96] found id: "0ca092b3ce3ee7591a69a1325fcce1cc752a14da32abb9827a163b163f63c990"
	I1227 20:29:03.756944  344572 cri.go:96] found id: ""
	I1227 20:29:03.757103  344572 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:29:03.774991  344572 out.go:203] 
	W1227 20:29:03.776145  344572 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:29:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:29:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 20:29:03.776164  344572 out.go:285] * 
	* 
	W1227 20:29:03.778652  344572 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 20:29:03.779720  344572 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-014435 --alsologtostderr -v=1 failed: exit status 80
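The pause fails the same way on every attempt: "sudo runc list -f json" exits 1 with "open /run/runc: no such file or directory", so the pause aborts with GUEST_PAUSE before anything is frozen, even though crictl keeps returning the same set of kube-system containers. A minimal way to see what the runtime actually has on disk (a sketch only; it assumes the no-preload-014435 profile is still up and reuses command forms already exercised elsewhere in this run):

	out/minikube-linux-amd64 ssh -p no-preload-014435 "sudo crictl ps -a"      # the CRI view of the containers listed above
	out/minikube-linux-amd64 ssh -p no-preload-014435 "sudo ls -la /run/runc"  # the state directory 'runc list' expects; absent per the error
	out/minikube-linux-amd64 ssh -p no-preload-014435 "sudo crio config"       # dumps the crio configuration, including its runtime settings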
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-014435
helpers_test.go:244: (dbg) docker inspect no-preload-014435:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8d514d0c2855a01d46c86888a6d5e056ab094bb969dd5844893ce45192dbf091",
	        "Created": "2025-12-27T20:26:44.562734517Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 329749,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:27:58.669801555Z",
	            "FinishedAt": "2025-12-27T20:27:57.667942615Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/8d514d0c2855a01d46c86888a6d5e056ab094bb969dd5844893ce45192dbf091/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d514d0c2855a01d46c86888a6d5e056ab094bb969dd5844893ce45192dbf091/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d514d0c2855a01d46c86888a6d5e056ab094bb969dd5844893ce45192dbf091/hosts",
	        "LogPath": "/var/lib/docker/containers/8d514d0c2855a01d46c86888a6d5e056ab094bb969dd5844893ce45192dbf091/8d514d0c2855a01d46c86888a6d5e056ab094bb969dd5844893ce45192dbf091-json.log",
	        "Name": "/no-preload-014435",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-014435:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-014435",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d514d0c2855a01d46c86888a6d5e056ab094bb969dd5844893ce45192dbf091",
	                "LowerDir": "/var/lib/docker/overlay2/114d17e71bca22dd5824b5f17afeb5d5341158842a7d6bb4e59223bea0882373-init/diff:/var/lib/docker/overlay2/37a0694d5e09176fe02add5b16d603e44d65e3a985a3465775c60819ea782502/diff",
	                "MergedDir": "/var/lib/docker/overlay2/114d17e71bca22dd5824b5f17afeb5d5341158842a7d6bb4e59223bea0882373/merged",
	                "UpperDir": "/var/lib/docker/overlay2/114d17e71bca22dd5824b5f17afeb5d5341158842a7d6bb4e59223bea0882373/diff",
	                "WorkDir": "/var/lib/docker/overlay2/114d17e71bca22dd5824b5f17afeb5d5341158842a7d6bb4e59223bea0882373/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-014435",
	                "Source": "/var/lib/docker/volumes/no-preload-014435/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-014435",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-014435",
	                "name.minikube.sigs.k8s.io": "no-preload-014435",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a4adaeaae06cff8ec29ec07cd00d06c6c44ccd16fdf2c795372c00fb52115742",
	            "SandboxKey": "/var/run/docker/netns/a4adaeaae06c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-014435": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "da47a33f1df0e45ac0871af30769ae1b8230bf0f77cd43d071316f15c5ec0145",
	                    "EndpointID": "4115876f23751fee1d7adc732e225b724b5e3af60589ff142ba4fc8783a35e37",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "be:1b:28:d6:66:f8",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-014435",
	                        "8d514d0c2855"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
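Consistent with the aborted pause, the inspect output above still shows the container itself as "Running": true with "Paused": false. Those two fields can be read back directly with the same style of format string the harness already uses (a sketch; the container name is assumed unchanged):

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' no-preload-014435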
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-014435 -n no-preload-014435
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-014435 -n no-preload-014435: exit status 2 (411.207539ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-014435 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-014435 logs -n 25: (1.423101874s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-436655 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo crio config                                                                                                                                                                                                             │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ delete  │ -p bridge-436655                                                                                                                                                                                                                              │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ addons  │ enable metrics-server -p no-preload-014435 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ delete  │ -p disable-driver-mounts-541137                                                                                                                                                                                                               │ disable-driver-mounts-541137 │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ start   │ -p old-k8s-version-762177 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-762177       │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:28 UTC │
	│ start   │ -p default-k8s-diff-port-954154 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-954154 │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:28 UTC │
	│ stop    │ -p no-preload-014435 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ addons  │ enable dashboard -p no-preload-014435 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ start   │ -p no-preload-014435 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:28 UTC │
	│ addons  │ enable metrics-server -p embed-certs-820583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-820583           │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ stop    │ -p embed-certs-820583 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-820583           │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-954154 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-954154 │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-820583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-820583           │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ start   │ -p embed-certs-820583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-820583           │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-954154 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-954154 │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ image   │ old-k8s-version-762177 image list --format=json                                                                                                                                                                                               │ old-k8s-version-762177       │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ pause   │ -p old-k8s-version-762177 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-762177       │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ delete  │ -p old-k8s-version-762177                                                                                                                                                                                                                     │ old-k8s-version-762177       │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-954154 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-954154 │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ start   │ -p default-k8s-diff-port-954154 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-954154 │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ delete  │ -p old-k8s-version-762177                                                                                                                                                                                                                     │ old-k8s-version-762177       │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ start   │ -p newest-cni-307728 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-307728            │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ image   │ no-preload-014435 image list --format=json                                                                                                                                                                                                    │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ pause   │ -p no-preload-014435 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:28:48
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:28:48.500169  340625 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:28:48.500408  340625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:28:48.500418  340625 out.go:374] Setting ErrFile to fd 2...
	I1227 20:28:48.500422  340625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:28:48.500700  340625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:28:48.501196  340625 out.go:368] Setting JSON to false
	I1227 20:28:48.502349  340625 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4277,"bootTime":1766863051,"procs":335,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 20:28:48.502402  340625 start.go:143] virtualization: kvm guest
	I1227 20:28:48.504445  340625 out.go:179] * [newest-cni-307728] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 20:28:48.506067  340625 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:28:48.506073  340625 notify.go:221] Checking for updates...
	I1227 20:28:48.507389  340625 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:28:48.510117  340625 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:28:48.511411  340625 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube
	I1227 20:28:48.516227  340625 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 20:28:48.520113  340625 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:28:48.522736  340625 config.go:182] Loaded profile config "default-k8s-diff-port-954154": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:48.522908  340625 config.go:182] Loaded profile config "embed-certs-820583": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:48.523079  340625 config.go:182] Loaded profile config "no-preload-014435": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:48.523223  340625 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:28:48.555608  340625 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 20:28:48.555757  340625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:28:48.625448  340625 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-27 20:28:48.613118826 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:28:48.625559  340625 docker.go:319] overlay module found
	I1227 20:28:48.627785  340625 out.go:179] * Using the docker driver based on user configuration
	I1227 20:28:48.628870  340625 start.go:309] selected driver: docker
	I1227 20:28:48.628893  340625 start.go:928] validating driver "docker" against <nil>
	I1227 20:28:48.628904  340625 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:28:48.629485  340625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:28:48.682637  340625 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-27 20:28:48.673679788 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:28:48.682799  340625 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	W1227 20:28:48.682830  340625 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1227 20:28:48.683062  340625 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 20:28:48.684798  340625 out.go:179] * Using Docker driver with root privileges
	I1227 20:28:48.685773  340625 cni.go:84] Creating CNI manager for ""
	I1227 20:28:48.685860  340625 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:28:48.685876  340625 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 20:28:48.685963  340625 start.go:353] cluster config:
	{Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:28:48.687269  340625 out.go:179] * Starting "newest-cni-307728" primary control-plane node in "newest-cni-307728" cluster
	I1227 20:28:48.688261  340625 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:28:48.689286  340625 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:28:48.690196  340625 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:28:48.690233  340625 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 20:28:48.690256  340625 cache.go:65] Caching tarball of preloaded images
	I1227 20:28:48.690277  340625 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:28:48.690345  340625 preload.go:251] Found /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 20:28:48.690356  340625 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:28:48.690441  340625 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/config.json ...
	I1227 20:28:48.690458  340625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/config.json: {Name:mke21830f72797f51981ebb2ed1e325363bf8b3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:48.710101  340625 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:28:48.710116  340625 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:28:48.710130  340625 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:28:48.710159  340625 start.go:360] acquireMachinesLock for newest-cni-307728: {Name:mk68119d6288f8bd1ffbe980f508592c691efe9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:28:48.710240  340625 start.go:364] duration metric: took 67.403µs to acquireMachinesLock for "newest-cni-307728"
	I1227 20:28:48.710260  340625 start.go:93] Provisioning new machine with config: &{Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:28:48.710333  340625 start.go:125] createHost starting for "" (driver="docker")
	I1227 20:28:48.407162  329454 pod_ready.go:83] waiting for pod "kube-proxy-ctvzq" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:48.806633  329454 pod_ready.go:94] pod "kube-proxy-ctvzq" is "Ready"
	I1227 20:28:48.806662  329454 pod_ready.go:86] duration metric: took 399.473531ms for pod "kube-proxy-ctvzq" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:49.008047  329454 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-014435" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:49.407573  329454 pod_ready.go:94] pod "kube-scheduler-no-preload-014435" is "Ready"
	I1227 20:28:49.407604  329454 pod_ready.go:86] duration metric: took 399.528497ms for pod "kube-scheduler-no-preload-014435" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:49.407621  329454 pod_ready.go:40] duration metric: took 39.908277209s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:28:49.460861  329454 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 20:28:49.462607  329454 out.go:179] * Done! kubectl is now configured to use "no-preload-014435" cluster and "default" namespace by default
	I1227 20:28:47.861047  340025 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-954154" ...
	I1227 20:28:47.861132  340025 cli_runner.go:164] Run: docker start default-k8s-diff-port-954154
	I1227 20:28:48.268150  340025 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-954154 --format={{.State.Status}}
	I1227 20:28:48.287351  340025 kic.go:430] container "default-k8s-diff-port-954154" state is running.
	I1227 20:28:48.287786  340025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-954154
	I1227 20:28:48.306988  340025 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/default-k8s-diff-port-954154/config.json ...
	I1227 20:28:48.307243  340025 machine.go:94] provisionDockerMachine start ...
	I1227 20:28:48.307328  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:48.327277  340025 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:48.327553  340025 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1227 20:28:48.327574  340025 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:28:48.328160  340025 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45072->127.0.0.1:33123: read: connection reset by peer
	I1227 20:28:51.449690  340025 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-954154
	
	I1227 20:28:51.449714  340025 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-954154"
	I1227 20:28:51.449773  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:51.468688  340025 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:51.468993  340025 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1227 20:28:51.469013  340025 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-954154 && echo "default-k8s-diff-port-954154" | sudo tee /etc/hostname
	I1227 20:28:51.599535  340025 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-954154
	
	I1227 20:28:51.599621  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:51.617442  340025 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:51.617738  340025 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1227 20:28:51.617772  340025 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-954154' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-954154/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-954154' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:28:51.741706  340025 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:28:51.741734  340025 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-10897/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-10897/.minikube}
	I1227 20:28:51.741756  340025 ubuntu.go:190] setting up certificates
	I1227 20:28:51.741775  340025 provision.go:84] configureAuth start
	I1227 20:28:51.741846  340025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-954154
	I1227 20:28:51.759749  340025 provision.go:143] copyHostCerts
	I1227 20:28:51.759817  340025 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem, removing ...
	I1227 20:28:51.759836  340025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem
	I1227 20:28:51.759925  340025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem (1078 bytes)
	I1227 20:28:51.760058  340025 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem, removing ...
	I1227 20:28:51.760071  340025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem
	I1227 20:28:51.760106  340025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem (1123 bytes)
	I1227 20:28:51.760260  340025 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem, removing ...
	I1227 20:28:51.760273  340025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem
	I1227 20:28:51.760304  340025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem (1675 bytes)
	I1227 20:28:51.760381  340025 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-954154 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-954154 localhost minikube]
	I1227 20:28:51.832661  340025 provision.go:177] copyRemoteCerts
	I1227 20:28:51.832729  340025 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:28:51.832777  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:51.851087  340025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/default-k8s-diff-port-954154/id_rsa Username:docker}
	I1227 20:28:51.942212  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:28:51.959450  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1227 20:28:51.975995  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:28:51.992950  340025 provision.go:87] duration metric: took 251.152964ms to configureAuth
	I1227 20:28:51.992976  340025 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:28:51.993158  340025 config.go:182] Loaded profile config "default-k8s-diff-port-954154": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:51.993315  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:52.011400  340025 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:52.011631  340025 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1227 20:28:52.011652  340025 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1227 20:28:48.641299  334810 pod_ready.go:104] pod "coredns-7d764666f9-nvnjg" is not "Ready", error: <nil>
	W1227 20:28:51.141968  334810 pod_ready.go:104] pod "coredns-7d764666f9-nvnjg" is not "Ready", error: <nil>
	I1227 20:28:48.711836  340625 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 20:28:48.712071  340625 start.go:159] libmachine.API.Create for "newest-cni-307728" (driver="docker")
	I1227 20:28:48.712103  340625 client.go:173] LocalClient.Create starting
	I1227 20:28:48.712145  340625 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem
	I1227 20:28:48.712175  340625 main.go:144] libmachine: Decoding PEM data...
	I1227 20:28:48.712193  340625 main.go:144] libmachine: Parsing certificate...
	I1227 20:28:48.712238  340625 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem
	I1227 20:28:48.712255  340625 main.go:144] libmachine: Decoding PEM data...
	I1227 20:28:48.712265  340625 main.go:144] libmachine: Parsing certificate...
	I1227 20:28:48.712589  340625 cli_runner.go:164] Run: docker network inspect newest-cni-307728 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 20:28:48.727613  340625 cli_runner.go:211] docker network inspect newest-cni-307728 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 20:28:48.727667  340625 network_create.go:284] running [docker network inspect newest-cni-307728] to gather additional debugging logs...
	I1227 20:28:48.727682  340625 cli_runner.go:164] Run: docker network inspect newest-cni-307728
	W1227 20:28:48.743879  340625 cli_runner.go:211] docker network inspect newest-cni-307728 returned with exit code 1
	I1227 20:28:48.743905  340625 network_create.go:287] error running [docker network inspect newest-cni-307728]: docker network inspect newest-cni-307728: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-307728 not found
	I1227 20:28:48.743927  340625 network_create.go:289] output of [docker network inspect newest-cni-307728]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-307728 not found
	
	** /stderr **
	I1227 20:28:48.744059  340625 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:28:48.760635  340625 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0db0ba8938bc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:24:76:f1:9a:26} reservation:<nil>}
	I1227 20:28:48.761253  340625 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-11f8d597a005 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:16:b4:6c:7e:ff:91} reservation:<nil>}
	I1227 20:28:48.762075  340625 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f7cf3350a110 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:14:0b:19:b4:4d} reservation:<nil>}
	I1227 20:28:48.762703  340625 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-df613bfb14c3 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5a:e7:81:22:a5:aa} reservation:<nil>}
	I1227 20:28:48.763398  340625 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-8bb8ec9ff71c IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:c2:ba:45:ee:97:15} reservation:<nil>}
	I1227 20:28:48.763977  340625 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-da47a33f1df0 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:b6:9e:57:b1:b3:31} reservation:<nil>}
	I1227 20:28:48.764830  340625 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ee9150}
	I1227 20:28:48.764849  340625 network_create.go:124] attempt to create docker network newest-cni-307728 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1227 20:28:48.764905  340625 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-307728 newest-cni-307728
	I1227 20:28:48.812651  340625 network_create.go:108] docker network newest-cni-307728 192.168.103.0/24 created
	I1227 20:28:48.812678  340625 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-307728" container
	I1227 20:28:48.812754  340625 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 20:28:48.829760  340625 cli_runner.go:164] Run: docker volume create newest-cni-307728 --label name.minikube.sigs.k8s.io=newest-cni-307728 --label created_by.minikube.sigs.k8s.io=true
	I1227 20:28:48.846811  340625 oci.go:103] Successfully created a docker volume newest-cni-307728
	I1227 20:28:48.846879  340625 cli_runner.go:164] Run: docker run --rm --name newest-cni-307728-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-307728 --entrypoint /usr/bin/test -v newest-cni-307728:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 20:28:49.251301  340625 oci.go:107] Successfully prepared a docker volume newest-cni-307728
	I1227 20:28:49.251356  340625 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:28:49.251371  340625 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 20:28:49.251443  340625 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-307728:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 20:28:53.048259  340625 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-307728:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.796770201s)
	I1227 20:28:53.048298  340625 kic.go:203] duration metric: took 3.796923553s to extract preloaded images to volume ...
	W1227 20:28:53.048388  340625 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1227 20:28:53.048428  340625 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1227 20:28:53.048478  340625 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 20:28:53.106204  340625 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-307728 --name newest-cni-307728 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-307728 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-307728 --network newest-cni-307728 --ip 192.168.103.2 --volume newest-cni-307728:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 20:28:53.377715  340625 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Running}}
	I1227 20:28:53.396903  340625 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:28:53.419741  340625 cli_runner.go:164] Run: docker exec newest-cni-307728 stat /var/lib/dpkg/alternatives/iptables
	I1227 20:28:53.470424  340625 oci.go:144] the created container "newest-cni-307728" has a running status.
	I1227 20:28:53.470467  340625 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa...
	I1227 20:28:53.122891  340025 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:28:53.122939  340025 machine.go:97] duration metric: took 4.815676959s to provisionDockerMachine
	I1227 20:28:53.122954  340025 start.go:293] postStartSetup for "default-k8s-diff-port-954154" (driver="docker")
	I1227 20:28:53.122967  340025 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:28:53.123032  340025 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:28:53.123077  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:53.143650  340025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/default-k8s-diff-port-954154/id_rsa Username:docker}
	I1227 20:28:53.241428  340025 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:28:53.245431  340025 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:28:53.245463  340025 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:28:53.245476  340025 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/addons for local assets ...
	I1227 20:28:53.245527  340025 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/files for local assets ...
	I1227 20:28:53.245638  340025 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem -> 144272.pem in /etc/ssl/certs
	I1227 20:28:53.245750  340025 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:28:53.259754  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:28:53.277646  340025 start.go:296] duration metric: took 154.680132ms for postStartSetup
	I1227 20:28:53.277719  340025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:28:53.277784  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:53.296596  340025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/default-k8s-diff-port-954154/id_rsa Username:docker}
	I1227 20:28:53.388191  340025 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:28:53.393370  340025 fix.go:56] duration metric: took 5.55357325s for fixHost
	I1227 20:28:53.393399  340025 start.go:83] releasing machines lock for "default-k8s-diff-port-954154", held for 5.5536385s
	I1227 20:28:53.393469  340025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-954154
	I1227 20:28:53.414844  340025 ssh_runner.go:195] Run: cat /version.json
	I1227 20:28:53.414940  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:53.414964  340025 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:28:53.415054  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:53.439070  340025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/default-k8s-diff-port-954154/id_rsa Username:docker}
	I1227 20:28:53.441062  340025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/default-k8s-diff-port-954154/id_rsa Username:docker}
	I1227 20:28:53.530425  340025 ssh_runner.go:195] Run: systemctl --version
	I1227 20:28:53.593039  340025 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:28:53.635364  340025 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:28:53.640899  340025 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:28:53.641235  340025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:28:53.650333  340025 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:28:53.650354  340025 start.go:496] detecting cgroup driver to use...
	I1227 20:28:53.650397  340025 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 20:28:53.650439  340025 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:28:53.668529  340025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:28:53.689731  340025 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:28:53.689800  340025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:28:53.709961  340025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:28:53.727001  340025 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:28:53.832834  340025 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:28:53.922310  340025 docker.go:234] disabling docker service ...
	I1227 20:28:53.922365  340025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:28:53.936501  340025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:28:53.950162  340025 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:28:54.041512  340025 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:28:54.134155  340025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:28:54.147385  340025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:28:54.161720  340025 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:28:54.161796  340025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:54.171044  340025 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 20:28:54.171106  340025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:54.180065  340025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:54.189150  340025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:54.197442  340025 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:28:54.205514  340025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:54.213985  340025 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:54.222017  340025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:54.230077  340025 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:28:54.237131  340025 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:28:54.244100  340025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:28:54.328674  340025 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:28:54.465250  340025 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:28:54.465333  340025 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:28:54.469352  340025 start.go:574] Will wait 60s for crictl version
	I1227 20:28:54.469401  340025 ssh_runner.go:195] Run: which crictl
	I1227 20:28:54.472893  340025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:28:54.498891  340025 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:28:54.498989  340025 ssh_runner.go:195] Run: crio --version
	I1227 20:28:54.526206  340025 ssh_runner.go:195] Run: crio --version
	I1227 20:28:54.556148  340025 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:28:54.557374  340025 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-954154 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:28:54.575049  340025 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 20:28:54.578875  340025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:28:54.588766  340025 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-954154 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-954154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:28:54.588870  340025 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:28:54.588927  340025 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:28:54.619011  340025 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:28:54.619030  340025 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:28:54.619069  340025 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:28:54.646154  340025 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:28:54.646177  340025 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:28:54.646185  340025 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.35.0 crio true true} ...
	I1227 20:28:54.646334  340025 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-954154 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-954154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:28:54.646422  340025 ssh_runner.go:195] Run: crio config
	I1227 20:28:54.692232  340025 cni.go:84] Creating CNI manager for ""
	I1227 20:28:54.692253  340025 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:28:54.692268  340025 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:28:54.692305  340025 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-954154 NodeName:default-k8s-diff-port-954154 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:28:54.692423  340025 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-954154"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:28:54.692483  340025 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:28:54.700975  340025 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:28:54.701056  340025 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:28:54.709484  340025 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1227 20:28:54.722400  340025 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:28:54.735438  340025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1227 20:28:54.747514  340025 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:28:54.751059  340025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:28:54.761269  340025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:28:54.842277  340025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:28:54.869119  340025 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/default-k8s-diff-port-954154 for IP: 192.168.85.2
	I1227 20:28:54.869143  340025 certs.go:195] generating shared ca certs ...
	I1227 20:28:54.869164  340025 certs.go:227] acquiring lock for ca certs: {Name:mkf159d13acb73e6d0ac046e4aaef1dce56a185d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:54.869322  340025 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key
	I1227 20:28:54.869377  340025 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key
	I1227 20:28:54.869391  340025 certs.go:257] generating profile certs ...
	I1227 20:28:54.869519  340025 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/default-k8s-diff-port-954154/client.key
	I1227 20:28:54.869600  340025 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/default-k8s-diff-port-954154/apiserver.key.b37aaa7a
	I1227 20:28:54.869654  340025 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/default-k8s-diff-port-954154/proxy-client.key
	I1227 20:28:54.869797  340025 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem (1338 bytes)
	W1227 20:28:54.869837  340025 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427_empty.pem, impossibly tiny 0 bytes
	I1227 20:28:54.869849  340025 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 20:28:54.869881  340025 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:28:54.869933  340025 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:28:54.869976  340025 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem (1675 bytes)
	I1227 20:28:54.870034  340025 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:28:54.870823  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:28:54.889499  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:28:54.908467  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:28:54.928722  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 20:28:54.956319  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/default-k8s-diff-port-954154/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 20:28:54.976184  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/default-k8s-diff-port-954154/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 20:28:54.992715  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/default-k8s-diff-port-954154/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:28:55.009591  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/default-k8s-diff-port-954154/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 20:28:55.025543  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /usr/share/ca-certificates/144272.pem (1708 bytes)
	I1227 20:28:55.042531  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:28:55.061224  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem --> /usr/share/ca-certificates/14427.pem (1338 bytes)
	I1227 20:28:55.081310  340025 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
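
Several of the "scp memory --> <path>" steps above copy content that exists only in the test runner's memory (the rendered kubeconfig here, addon manifests later) straight onto the node. A minimal Go sketch of that idea, using the stock ssh CLI and stdin rather than minikube's internal ssh_runner; the key path, user, host, and port are the ones shown in this log's ssh client lines, while the remote command and kubeconfig body are illustrative placeholders:

package main

import (
	"bytes"
	"os/exec"
)

func main() {
	// Content rendered in memory by the runner (placeholder body).
	kubeconfig := []byte("apiVersion: v1\nkind: Config\n# ...\n")

	cmd := exec.Command("ssh",
		"-i", "/home/jenkins/minikube-integration/22332-10897/.minikube/machines/default-k8s-diff-port-954154/id_rsa",
		"-p", "33123", "docker@127.0.0.1",
		"sudo tee /var/lib/minikube/kubeconfig >/dev/null")
	cmd.Stdin = bytes.NewReader(kubeconfig)
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

Piping through "sudo tee" is just one way to land a root-owned file over SSH; the real runner may write it differently.
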
	I1227 20:28:55.095303  340025 ssh_runner.go:195] Run: openssl version
	I1227 20:28:55.101512  340025 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/144272.pem
	I1227 20:28:55.109364  340025 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/144272.pem /etc/ssl/certs/144272.pem
	I1227 20:28:55.117062  340025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144272.pem
	I1227 20:28:55.120521  340025 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 19:58 /usr/share/ca-certificates/144272.pem
	I1227 20:28:55.120562  340025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144272.pem
	I1227 20:28:55.156522  340025 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:28:55.163769  340025 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:55.170984  340025 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:28:55.178467  340025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:55.182664  340025 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:55.182714  340025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:55.216669  340025 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:28:55.224508  340025 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14427.pem
	I1227 20:28:55.231727  340025 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14427.pem /etc/ssl/certs/14427.pem
	I1227 20:28:55.240655  340025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14427.pem
	I1227 20:28:55.244863  340025 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 19:58 /usr/share/ca-certificates/14427.pem
	I1227 20:28:55.244927  340025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14427.pem
	I1227 20:28:55.281470  340025 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
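
Each hash/symlink cycle above (ls -la, openssl x509 -hash -noout, sudo test -L <hash>.0) is how a CA bundle becomes discoverable in /etc/ssl/certs by its OpenSSL subject hash (e.g. b5213941 for minikubeCA.pem, 51391683 for 14427.pem). A minimal Go sketch of one such cycle, assuming the <hash>.0 link should point at the installed PEM; it shells out to the same openssl command the log runs and needs root to write /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem"
	// Same command the log runs to obtain the OpenSSL subject hash.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // drop any stale link first
	if err := os.Symlink(certPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", certPath)
}
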
	I1227 20:28:55.288784  340025 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:28:55.292510  340025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:28:55.333080  340025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:28:55.369784  340025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:28:55.424693  340025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:28:55.475758  340025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:28:55.533819  340025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
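
Each "openssl x509 -noout -in <cert> -checkend 86400" run above asks one question: will this certificate still be valid 24 hours from now? A minimal equivalent in Go, using one of the cert paths from the log (error handling kept deliberately small):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM data found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Same check as "-checkend 86400": still valid 24h from now?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h")
	} else {
		fmt.Println("certificate valid for at least another 24h")
	}
}
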
	I1227 20:28:55.591758  340025 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-954154 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-954154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:28:55.591848  340025 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:28:55.591890  340025 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:28:55.627989  340025 cri.go:96] found id: "5931439a8c0a571b5456aa8ff89ec7efa4a328c297d4825276b5ce8da13b99a6"
	I1227 20:28:55.628014  340025 cri.go:96] found id: "706a22c5fabaf1ee5036f9f7e94b4c7a02acaeb2002009e6416b6682929512f2"
	I1227 20:28:55.628020  340025 cri.go:96] found id: "0959afbe1a9958167e4657feae2ee275c220e183182d01f3bfb3c248002f75ad"
	I1227 20:28:55.628027  340025 cri.go:96] found id: "8b0860861ddb032e4078fd3575653ab5f5a77cc25e17d9ecb7910330acbef6e8"
	I1227 20:28:55.628032  340025 cri.go:96] found id: ""
	I1227 20:28:55.628077  340025 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 20:28:55.642876  340025 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:28:55Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:28:55.642973  340025 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:28:55.652554  340025 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:28:55.652578  340025 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:28:55.652625  340025 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:28:55.660979  340025 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:28:55.662107  340025 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-954154" does not appear in /home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:28:55.662856  340025 kubeconfig.go:62] /home/jenkins/minikube-integration/22332-10897/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-954154" cluster setting kubeconfig missing "default-k8s-diff-port-954154" context setting]
	I1227 20:28:55.664153  340025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/kubeconfig: {Name:mk4ccafb996928672a8e78a62ce47d8add645009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:55.666338  340025 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:28:55.676564  340025 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1227 20:28:55.676593  340025 kubeadm.go:602] duration metric: took 24.008347ms to restartPrimaryControlPlane
	I1227 20:28:55.676602  340025 kubeadm.go:403] duration metric: took 84.854268ms to StartCluster
	I1227 20:28:55.676617  340025 settings.go:142] acquiring lock: {Name:mkf3077b12a3cef0d75d0d0642c2652aa74718f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:55.676673  340025 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:28:55.678946  340025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/kubeconfig: {Name:mk4ccafb996928672a8e78a62ce47d8add645009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:55.679180  340025 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:28:55.679553  340025 config.go:182] Loaded profile config "default-k8s-diff-port-954154": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:55.679619  340025 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:28:55.679775  340025 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-954154"
	I1227 20:28:55.679791  340025 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-954154"
	W1227 20:28:55.679799  340025 addons.go:248] addon storage-provisioner should already be in state true
	I1227 20:28:55.679823  340025 host.go:66] Checking if "default-k8s-diff-port-954154" exists ...
	I1227 20:28:55.679928  340025 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-954154"
	I1227 20:28:55.679956  340025 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-954154"
	W1227 20:28:55.679964  340025 addons.go:248] addon dashboard should already be in state true
	I1227 20:28:55.679991  340025 host.go:66] Checking if "default-k8s-diff-port-954154" exists ...
	I1227 20:28:55.680547  340025 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-954154 --format={{.State.Status}}
	I1227 20:28:55.680638  340025 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-954154"
	I1227 20:28:55.680657  340025 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-954154"
	I1227 20:28:55.681186  340025 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-954154 --format={{.State.Status}}
	I1227 20:28:55.683393  340025 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-954154 --format={{.State.Status}}
	I1227 20:28:55.683653  340025 out.go:179] * Verifying Kubernetes components...
	I1227 20:28:55.684780  340025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:28:55.714633  340025 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:28:55.716852  340025 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-954154"
	W1227 20:28:55.716877  340025 addons.go:248] addon default-storageclass should already be in state true
	I1227 20:28:55.716906  340025 host.go:66] Checking if "default-k8s-diff-port-954154" exists ...
	I1227 20:28:55.717089  340025 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 20:28:55.717135  340025 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:28:55.717147  340025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:28:55.717215  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:55.717777  340025 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-954154 --format={{.State.Status}}
	I1227 20:28:55.722759  340025 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 20:28:53.664479  340625 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 20:28:53.699126  340625 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:28:53.720716  340625 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 20:28:53.720739  340625 kic_runner.go:114] Args: [docker exec --privileged newest-cni-307728 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 20:28:53.774092  340625 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:28:53.795085  340625 machine.go:94] provisionDockerMachine start ...
	I1227 20:28:53.795200  340625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:28:53.815121  340625 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:53.815367  340625 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1227 20:28:53.815380  340625 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:28:53.946421  340625 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-307728
	
	I1227 20:28:53.946449  340625 ubuntu.go:182] provisioning hostname "newest-cni-307728"
	I1227 20:28:53.946514  340625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:28:53.967479  340625 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:53.967688  340625 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1227 20:28:53.967701  340625 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-307728 && echo "newest-cni-307728" | sudo tee /etc/hostname
	I1227 20:28:54.109706  340625 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-307728
	
	I1227 20:28:54.109778  340625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:28:54.129736  340625 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:54.129958  340625 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1227 20:28:54.129980  340625 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-307728' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-307728/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-307728' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:28:54.255088  340625 main.go:144] libmachine: SSH cmd err, output: <nil>: 
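
The SSH command above is an idempotent /etc/hosts fix-up: if a line for the hostname already exists it is left alone, otherwise any 127.0.1.1 entry is rewritten, and failing that a new one is appended. A minimal Go sketch of the same logic (must run as root on the node; the hostname is the one from the log):

package main

import (
	"os"
	"regexp"
)

func main() {
	const hostname = "newest-cni-307728"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	text := string(data)
	// Already has a line ending in the hostname? Nothing to do.
	if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(hostname)+`$`).MatchString(text) {
		return
	}
	// Otherwise rewrite an existing 127.0.1.1 entry, or append a new one.
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(text) {
		text = re.ReplaceAllString(text, "127.0.1.1 "+hostname)
	} else {
		text += "127.0.1.1 " + hostname + "\n"
	}
	if err := os.WriteFile("/etc/hosts", []byte(text), 0644); err != nil {
		panic(err)
	}
}
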
	I1227 20:28:54.255111  340625 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-10897/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-10897/.minikube}
	I1227 20:28:54.255160  340625 ubuntu.go:190] setting up certificates
	I1227 20:28:54.255172  340625 provision.go:84] configureAuth start
	I1227 20:28:54.255217  340625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-307728
	I1227 20:28:54.276936  340625 provision.go:143] copyHostCerts
	I1227 20:28:54.276997  340625 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem, removing ...
	I1227 20:28:54.277008  340625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem
	I1227 20:28:54.277094  340625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem (1078 bytes)
	I1227 20:28:54.277219  340625 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem, removing ...
	I1227 20:28:54.277228  340625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem
	I1227 20:28:54.277279  340625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem (1123 bytes)
	I1227 20:28:54.277365  340625 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem, removing ...
	I1227 20:28:54.277372  340625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem
	I1227 20:28:54.277407  340625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem (1675 bytes)
	I1227 20:28:54.277482  340625 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem org=jenkins.newest-cni-307728 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-307728]
	I1227 20:28:54.307332  340625 provision.go:177] copyRemoteCerts
	I1227 20:28:54.307382  340625 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:28:54.307415  340625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:28:54.325258  340625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:28:54.419033  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:28:54.438154  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 20:28:54.455050  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 20:28:54.472709  340625 provision.go:87] duration metric: took 217.519219ms to configureAuth
	I1227 20:28:54.472736  340625 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:28:54.472956  340625 config.go:182] Loaded profile config "newest-cni-307728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:54.473073  340625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:28:54.492336  340625 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:54.492642  340625 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1227 20:28:54.492669  340625 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:28:54.753361  340625 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:28:54.753389  340625 machine.go:97] duration metric: took 958.279107ms to provisionDockerMachine
	I1227 20:28:54.753401  340625 client.go:176] duration metric: took 6.041292407s to LocalClient.Create
	I1227 20:28:54.753424  340625 start.go:167] duration metric: took 6.041353878s to libmachine.API.Create "newest-cni-307728"
	I1227 20:28:54.753439  340625 start.go:293] postStartSetup for "newest-cni-307728" (driver="docker")
	I1227 20:28:54.753451  340625 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:28:54.753523  340625 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:28:54.753568  340625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:28:54.772791  340625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:28:54.870458  340625 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:28:54.874573  340625 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:28:54.874605  340625 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:28:54.874618  340625 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/addons for local assets ...
	I1227 20:28:54.874671  340625 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/files for local assets ...
	I1227 20:28:54.874756  340625 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem -> 144272.pem in /etc/ssl/certs
	I1227 20:28:54.874874  340625 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:28:54.883036  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:28:54.906634  340625 start.go:296] duration metric: took 153.179795ms for postStartSetup
	I1227 20:28:54.907029  340625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-307728
	I1227 20:28:54.928933  340625 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/config.json ...
	I1227 20:28:54.929249  340625 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:28:54.929300  340625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:28:54.954691  340625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:28:55.044982  340625 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:28:55.049336  340625 start.go:128] duration metric: took 6.338989786s to createHost
	I1227 20:28:55.049357  340625 start.go:83] releasing machines lock for "newest-cni-307728", held for 6.339107658s
	I1227 20:28:55.049418  340625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-307728
	I1227 20:28:55.070462  340625 ssh_runner.go:195] Run: cat /version.json
	I1227 20:28:55.070526  340625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:28:55.070556  340625 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:28:55.070631  340625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:28:55.089304  340625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:28:55.090352  340625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:28:55.233933  340625 ssh_runner.go:195] Run: systemctl --version
	I1227 20:28:55.241758  340625 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:28:55.275894  340625 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:28:55.280648  340625 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:28:55.280715  340625 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:28:55.307733  340625 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1227 20:28:55.307753  340625 start.go:496] detecting cgroup driver to use...
	I1227 20:28:55.307785  340625 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 20:28:55.307839  340625 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:28:55.323192  340625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:28:55.335205  340625 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:28:55.335265  340625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:28:55.351180  340625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:28:55.369175  340625 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:28:55.473778  340625 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:28:55.602420  340625 docker.go:234] disabling docker service ...
	I1227 20:28:55.602482  340625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:28:55.625550  340625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:28:55.643841  340625 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:28:55.802566  340625 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:28:55.918642  340625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:28:55.936192  340625 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:28:55.955225  340625 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:28:55.955288  340625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:55.966672  340625 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 20:28:55.966742  340625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:55.978502  340625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:55.989239  340625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:56.000177  340625 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:28:56.009564  340625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:56.022264  340625 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:56.037345  340625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:56.049451  340625 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:28:56.056993  340625 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:28:56.064796  340625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:28:56.162614  340625 ssh_runner.go:195] Run: sudo systemctl restart crio
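
The sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.10.1 pause image and the systemd cgroup manager before crio is restarted. A minimal Go sketch of just those two substitutions (the conmon_cgroup and default_sysctls edits are analogous); it needs root, and a "systemctl restart crio" afterwards as in the log:

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	text := string(data)
	// Mirror of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
	text = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(text, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Mirror of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	text = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(text, `cgroup_manager = "systemd"`)
	if err := os.WriteFile(path, []byte(text), 0644); err != nil {
		panic(err)
	}
}
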
	I1227 20:28:56.313255  340625 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:28:56.313324  340625 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:28:56.317889  340625 start.go:574] Will wait 60s for crictl version
	I1227 20:28:56.317981  340625 ssh_runner.go:195] Run: which crictl
	I1227 20:28:56.322051  340625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:28:56.349449  340625 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:28:56.349532  340625 ssh_runner.go:195] Run: crio --version
	I1227 20:28:56.382000  340625 ssh_runner.go:195] Run: crio --version
	I1227 20:28:56.413048  340625 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:28:56.414278  340625 cli_runner.go:164] Run: docker network inspect newest-cni-307728 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:28:56.433453  340625 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1227 20:28:56.437559  340625 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:28:56.449949  340625 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1227 20:28:55.724000  340025 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 20:28:55.724016  340025 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 20:28:55.724065  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:55.744982  340025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/default-k8s-diff-port-954154/id_rsa Username:docker}
	I1227 20:28:55.748159  340025 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:28:55.748180  340025 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:28:55.748239  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:55.754480  340025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/default-k8s-diff-port-954154/id_rsa Username:docker}
	I1227 20:28:55.778258  340025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/default-k8s-diff-port-954154/id_rsa Username:docker}
	I1227 20:28:55.867117  340025 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 20:28:55.867140  340025 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 20:28:55.872461  340025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:28:55.875196  340025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:28:55.883874  340025 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 20:28:55.883895  340025 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 20:28:55.887443  340025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:28:55.901123  340025 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 20:28:55.901148  340025 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 20:28:55.918460  340025 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 20:28:55.918485  340025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 20:28:55.937315  340025 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 20:28:55.937335  340025 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 20:28:55.952528  340025 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 20:28:55.952556  340025 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 20:28:55.967852  340025 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 20:28:55.967875  340025 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 20:28:55.984392  340025 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 20:28:55.984418  340025 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 20:28:56.000591  340025 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:28:56.000616  340025 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 20:28:56.014356  340025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:28:57.527657  340025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.655168953s)
	I1227 20:28:57.527711  340025 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.652481169s)
	I1227 20:28:57.527762  340025 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-954154" to be "Ready" ...
	I1227 20:28:57.527787  340025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.513395036s)
	I1227 20:28:57.527723  340025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.640259424s)
	I1227 20:28:57.529812  340025 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-954154 addons enable metrics-server
	
	I1227 20:28:57.536501  340025 node_ready.go:49] node "default-k8s-diff-port-954154" is "Ready"
	I1227 20:28:57.536525  340025 node_ready.go:38] duration metric: took 8.726968ms for node "default-k8s-diff-port-954154" to be "Ready" ...
	I1227 20:28:57.536540  340025 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:28:57.536581  340025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:28:57.541048  340025 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 20:28:57.542123  340025 addons.go:530] duration metric: took 1.862504727s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1227 20:28:57.549333  340025 api_server.go:72] duration metric: took 1.870126325s to wait for apiserver process to appear ...
	I1227 20:28:57.549353  340025 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:28:57.549370  340025 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1227 20:28:57.553748  340025 api_server.go:325] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:28:57.553768  340025 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
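
The 500 responses above are expected while the apiserver's post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still running; the log shows minikube polling /healthz until it turns 200. A minimal Go sketch of such a polling loop against the endpoint from the log; InsecureSkipVerify is only for illustration against the cluster's self-signed serving cert:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for attempt := 0; attempt < 60; attempt++ {
		resp, err := client.Get("https://192.168.85.2:8444/healthz")
		if err == nil {
			status := resp.StatusCode
			resp.Body.Close()
			if status == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", status, "- retrying")
		}
		time.Sleep(time.Second)
	}
	fmt.Println("gave up waiting for /healthz")
}
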
	I1227 20:28:56.450940  340625 kubeadm.go:884] updating cluster {Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:28:56.451057  340625 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:28:56.451105  340625 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:28:56.486578  340625 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:28:56.486604  340625 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:28:56.486659  340625 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:28:56.516779  340625 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:28:56.516806  340625 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:28:56.516814  340625 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0 crio true true} ...
	I1227 20:28:56.516942  340625 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-307728 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:28:56.517034  340625 ssh_runner.go:195] Run: crio config
	I1227 20:28:56.564462  340625 cni.go:84] Creating CNI manager for ""
	I1227 20:28:56.564481  340625 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:28:56.564497  340625 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1227 20:28:56.564520  340625 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-307728 NodeName:newest-cni-307728 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:28:56.564660  340625 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-307728"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:28:56.564717  340625 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:28:56.574206  340625 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:28:56.574276  340625 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:28:56.582079  340625 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1227 20:28:56.600287  340625 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:28:56.616380  340625 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1227 20:28:56.629039  340625 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:28:56.632734  340625 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:28:56.643610  340625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:28:56.731167  340625 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:28:56.767503  340625 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728 for IP: 192.168.103.2
	I1227 20:28:56.767525  340625 certs.go:195] generating shared ca certs ...
	I1227 20:28:56.767558  340625 certs.go:227] acquiring lock for ca certs: {Name:mkf159d13acb73e6d0ac046e4aaef1dce56a185d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:56.767733  340625 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key
	I1227 20:28:56.767803  340625 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key
	I1227 20:28:56.767817  340625 certs.go:257] generating profile certs ...
	I1227 20:28:56.767890  340625 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/client.key
	I1227 20:28:56.767942  340625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/client.crt with IP's: []
	I1227 20:28:56.794375  340625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/client.crt ...
	I1227 20:28:56.794408  340625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/client.crt: {Name:mkbe31918a2628f8309a18a3c482be7f59d5e510 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:56.794621  340625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/client.key ...
	I1227 20:28:56.794636  340625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/client.key: {Name:mkbc3d519f763199b338bf70577fc2817f7c4332 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:56.794741  340625 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.key.f45295df
	I1227 20:28:56.794772  340625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.crt.f45295df with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1227 20:28:56.879148  340625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.crt.f45295df ...
	I1227 20:28:56.879178  340625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.crt.f45295df: {Name:mk64269dd374c740149f7faf9e729189e8331f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:56.879382  340625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.key.f45295df ...
	I1227 20:28:56.879400  340625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.key.f45295df: {Name:mkc2c754a6d53e33d9862453e662ca2209e188d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:56.879503  340625 certs.go:382] copying /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.crt.f45295df -> /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.crt
	I1227 20:28:56.879600  340625 certs.go:386] copying /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.key.f45295df -> /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.key
	I1227 20:28:56.879659  340625 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.key
	I1227 20:28:56.879674  340625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.crt with IP's: []
	I1227 20:28:56.951167  340625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.crt ...
	I1227 20:28:56.951204  340625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.crt: {Name:mk61de4f8eabcfb14024a7f87b814c37a2ed9228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:56.951385  340625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.key ...
	I1227 20:28:56.951404  340625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.key: {Name:mk921c81a121096b317f7cf3e18e26665afa5455 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:56.951654  340625 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem (1338 bytes)
	W1227 20:28:56.951708  340625 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427_empty.pem, impossibly tiny 0 bytes
	I1227 20:28:56.951725  340625 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 20:28:56.951762  340625 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:28:56.951794  340625 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:28:56.951828  340625 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem (1675 bytes)
	I1227 20:28:56.951885  340625 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:28:56.952685  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:28:56.989260  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:28:57.016199  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:28:57.038448  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 20:28:57.063231  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 20:28:57.083056  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:28:57.103361  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:28:57.124895  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:28:57.146997  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:28:57.168985  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem --> /usr/share/ca-certificates/14427.pem (1338 bytes)
	I1227 20:28:57.192337  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /usr/share/ca-certificates/144272.pem (1708 bytes)
	I1227 20:28:57.212648  340625 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:28:57.226053  340625 ssh_runner.go:195] Run: openssl version
	I1227 20:28:57.232690  340625 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:57.240634  340625 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:28:57.248278  340625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:57.253026  340625 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:57.253083  340625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:57.293170  340625 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:28:57.302311  340625 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 20:28:57.310221  340625 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14427.pem
	I1227 20:28:57.319835  340625 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14427.pem /etc/ssl/certs/14427.pem
	I1227 20:28:57.328517  340625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14427.pem
	I1227 20:28:57.333508  340625 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 19:58 /usr/share/ca-certificates/14427.pem
	I1227 20:28:57.333570  340625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14427.pem
	I1227 20:28:57.385727  340625 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:28:57.395544  340625 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14427.pem /etc/ssl/certs/51391683.0
	I1227 20:28:57.405387  340625 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/144272.pem
	I1227 20:28:57.414374  340625 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/144272.pem /etc/ssl/certs/144272.pem
	I1227 20:28:57.422682  340625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144272.pem
	I1227 20:28:57.426727  340625 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 19:58 /usr/share/ca-certificates/144272.pem
	I1227 20:28:57.426781  340625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144272.pem
	I1227 20:28:57.468027  340625 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:28:57.475579  340625 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/144272.pem /etc/ssl/certs/3ec20f2e.0
	I1227 20:28:57.482669  340625 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:28:57.486989  340625 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 20:28:57.487049  340625 kubeadm.go:401] StartCluster: {Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:28:57.487127  340625 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:28:57.487176  340625 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:28:57.526113  340625 cri.go:96] found id: ""
	I1227 20:28:57.526185  340625 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:28:57.535676  340625 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 20:28:57.544346  340625 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 20:28:57.544400  340625 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 20:28:57.552362  340625 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 20:28:57.552381  340625 kubeadm.go:158] found existing configuration files:
	
	I1227 20:28:57.552419  340625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 20:28:57.559859  340625 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 20:28:57.559901  340625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 20:28:57.566894  340625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 20:28:57.574224  340625 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 20:28:57.574271  340625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 20:28:57.581383  340625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 20:28:57.588654  340625 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 20:28:57.588689  340625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 20:28:57.595675  340625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 20:28:57.603162  340625 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 20:28:57.603207  340625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 20:28:57.610120  340625 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 20:28:57.651578  340625 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 20:28:57.651650  340625 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 20:28:57.717226  340625 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 20:28:57.717315  340625 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1227 20:28:57.717358  340625 kubeadm.go:319] OS: Linux
	I1227 20:28:57.717448  340625 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 20:28:57.717519  340625 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 20:28:57.717567  340625 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 20:28:57.717647  340625 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 20:28:57.717733  340625 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 20:28:57.717812  340625 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 20:28:57.717923  340625 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 20:28:57.717998  340625 kubeadm.go:319] CGROUPS_IO: enabled
	I1227 20:28:57.774331  340625 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 20:28:57.774452  340625 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 20:28:57.774590  340625 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 20:28:57.781865  340625 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1227 20:28:53.641780  334810 pod_ready.go:104] pod "coredns-7d764666f9-nvnjg" is not "Ready", error: <nil>
	W1227 20:28:56.141575  334810 pod_ready.go:104] pod "coredns-7d764666f9-nvnjg" is not "Ready", error: <nil>
	I1227 20:28:57.784248  340625 out.go:252]   - Generating certificates and keys ...
	I1227 20:28:57.784354  340625 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 20:28:57.784471  340625 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 20:28:57.800338  340625 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 20:28:57.829651  340625 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 20:28:57.870093  340625 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 20:28:58.023851  340625 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 20:28:58.175326  340625 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 20:28:58.175458  340625 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-307728] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1227 20:28:58.227767  340625 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 20:28:58.227948  340625 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-307728] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1227 20:28:58.327146  340625 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 20:28:58.413976  340625 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 20:28:58.519514  340625 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 20:28:58.519622  340625 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 20:28:58.602374  340625 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 20:28:58.658792  340625 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 20:28:58.828754  340625 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 20:28:58.899131  340625 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 20:28:58.981756  340625 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 20:28:58.982297  340625 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 20:28:58.986398  340625 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 20:28:58.050409  340025 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1227 20:28:58.055041  340025 api_server.go:325] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:28:58.055071  340025 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:28:58.549732  340025 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1227 20:28:58.554808  340025 api_server.go:325] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1227 20:28:58.555845  340025 api_server.go:141] control plane version: v1.35.0
	I1227 20:28:58.555884  340025 api_server.go:131] duration metric: took 1.006522468s to wait for apiserver health ...
	I1227 20:28:58.555894  340025 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:28:58.607197  340025 system_pods.go:59] 8 kube-system pods found
	I1227 20:28:58.607235  340025 system_pods.go:61] "coredns-7d764666f9-gtzdb" [94553f69-88cf-4e2c-94e4-99d2034bcc9a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:28:58.607245  340025 system_pods.go:61] "etcd-default-k8s-diff-port-954154" [4eafa22e-2c0b-4d78-90f2-2becf0d0e321] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:28:58.607258  340025 system_pods.go:61] "kindnet-c9zm9" [3b0b3ae7-d30d-4a2d-bbff-21dba59ffb5a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 20:28:58.607263  340025 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-954154" [f52b07b4-22fb-4e93-bfa4-f86ed39beed6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:28:58.607273  340025 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-954154" [b43b5648-7206-444d-8fdd-7504b26bf16c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:28:58.607281  340025 system_pods.go:61] "kube-proxy-m5zcc" [2ca10db8-75c1-459b-b3ec-bdb128f9d72a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 20:28:58.607286  340025 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-954154" [b39ea939-f629-4d8d-9874-3234839d3abd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:28:58.607292  340025 system_pods.go:61] "storage-provisioner" [e47d55de-82b6-47f6-b639-1c28182777af] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:28:58.607299  340025 system_pods.go:74] duration metric: took 51.39957ms to wait for pod list to return data ...
	I1227 20:28:58.607309  340025 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:28:58.609698  340025 default_sa.go:45] found service account: "default"
	I1227 20:28:58.609718  340025 default_sa.go:55] duration metric: took 2.396384ms for default service account to be created ...
	I1227 20:28:58.609726  340025 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:28:58.612207  340025 system_pods.go:86] 8 kube-system pods found
	I1227 20:28:58.612229  340025 system_pods.go:89] "coredns-7d764666f9-gtzdb" [94553f69-88cf-4e2c-94e4-99d2034bcc9a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:28:58.612237  340025 system_pods.go:89] "etcd-default-k8s-diff-port-954154" [4eafa22e-2c0b-4d78-90f2-2becf0d0e321] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:28:58.612250  340025 system_pods.go:89] "kindnet-c9zm9" [3b0b3ae7-d30d-4a2d-bbff-21dba59ffb5a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 20:28:58.612256  340025 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-954154" [f52b07b4-22fb-4e93-bfa4-f86ed39beed6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:28:58.612266  340025 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-954154" [b43b5648-7206-444d-8fdd-7504b26bf16c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:28:58.612271  340025 system_pods.go:89] "kube-proxy-m5zcc" [2ca10db8-75c1-459b-b3ec-bdb128f9d72a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 20:28:58.612282  340025 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-954154" [b39ea939-f629-4d8d-9874-3234839d3abd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:28:58.612294  340025 system_pods.go:89] "storage-provisioner" [e47d55de-82b6-47f6-b639-1c28182777af] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:28:58.612305  340025 system_pods.go:126] duration metric: took 2.569534ms to wait for k8s-apps to be running ...
	I1227 20:28:58.612315  340025 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:28:58.612351  340025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:28:58.624877  340025 system_svc.go:56] duration metric: took 12.557367ms WaitForService to wait for kubelet
	I1227 20:28:58.624898  340025 kubeadm.go:587] duration metric: took 2.945693199s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:28:58.624959  340025 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:28:58.627235  340025 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 20:28:58.627258  340025 node_conditions.go:123] node cpu capacity is 8
	I1227 20:28:58.627273  340025 node_conditions.go:105] duration metric: took 2.308686ms to run NodePressure ...
	I1227 20:28:58.627296  340025 start.go:242] waiting for startup goroutines ...
	I1227 20:28:58.627310  340025 start.go:247] waiting for cluster config update ...
	I1227 20:28:58.627328  340025 start.go:256] writing updated cluster config ...
	I1227 20:28:58.627581  340025 ssh_runner.go:195] Run: rm -f paused
	I1227 20:28:58.631443  340025 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:28:58.634602  340025 pod_ready.go:83] waiting for pod "coredns-7d764666f9-gtzdb" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 20:29:00.640993  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	W1227 20:28:58.641325  334810 pod_ready.go:104] pod "coredns-7d764666f9-nvnjg" is not "Ready", error: <nil>
	W1227 20:29:00.641748  334810 pod_ready.go:104] pod "coredns-7d764666f9-nvnjg" is not "Ready", error: <nil>
	W1227 20:29:02.647042  334810 pod_ready.go:104] pod "coredns-7d764666f9-nvnjg" is not "Ready", error: <nil>
	I1227 20:28:58.987788  340625 out.go:252]   - Booting up control plane ...
	I1227 20:28:58.987909  340625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 20:28:58.988350  340625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 20:28:58.991232  340625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 20:28:59.009829  340625 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 20:28:59.010080  340625 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 20:28:59.018540  340625 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 20:28:59.018939  340625 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 20:28:59.019013  340625 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 20:28:59.122102  340625 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 20:28:59.122243  340625 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 20:28:59.623869  340625 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.822281ms
	I1227 20:28:59.626698  340625 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 20:28:59.626835  340625 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1227 20:28:59.626991  340625 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 20:28:59.627081  340625 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 20:29:00.132655  340625 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 505.729406ms
	I1227 20:29:01.422270  340625 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.79552906s
	I1227 20:29:03.128834  340625 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502046336s
	I1227 20:29:03.152511  340625 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 20:29:03.162776  340625 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 20:29:03.172127  340625 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 20:29:03.172413  340625 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-307728 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 20:29:03.183365  340625 kubeadm.go:319] [bootstrap-token] Using token: m3fv2a.3hy2dotriyukxsjh
	I1227 20:29:03.184664  340625 out.go:252]   - Configuring RBAC rules ...
	I1227 20:29:03.184815  340625 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 20:29:03.188315  340625 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 20:29:03.194363  340625 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 20:29:03.196969  340625 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 20:29:03.199431  340625 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 20:29:03.201765  340625 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	
	
	==> CRI-O <==
	Dec 27 20:28:25 no-preload-014435 crio[566]: time="2025-12-27T20:28:25.27837067Z" level=info msg="Started container" PID=1742 containerID=9b37c4ffb76d82b01711eaee7de75c77f95dfb0a270e8c04874ff3747d88ea65 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk/dashboard-metrics-scraper id=68599563-4452-42f3-b7d8-d4829f7f5bdd name=/runtime.v1.RuntimeService/StartContainer sandboxID=f578af9f5f9f446bcedf8fa5800ed8103b745a26295e3dc9548bbb50c7f6fdea
	Dec 27 20:28:25 no-preload-014435 crio[566]: time="2025-12-27T20:28:25.335283029Z" level=info msg="Removing container: c06e8f182bdafe6d8a9f38a86e7345156bb2efd9ed93ead9991c0833628623c8" id=3045db0d-d603-4e9d-bd3d-8d585e1caeb2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:28:25 no-preload-014435 crio[566]: time="2025-12-27T20:28:25.345049792Z" level=info msg="Removed container c06e8f182bdafe6d8a9f38a86e7345156bb2efd9ed93ead9991c0833628623c8: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk/dashboard-metrics-scraper" id=3045db0d-d603-4e9d-bd3d-8d585e1caeb2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:28:39 no-preload-014435 crio[566]: time="2025-12-27T20:28:39.371022621Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5160c85f-dba0-42b3-9466-3d1c44075904 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:28:39 no-preload-014435 crio[566]: time="2025-12-27T20:28:39.371905487Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ed99fd5e-8d38-463e-a051-c77311a00a05 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:28:39 no-preload-014435 crio[566]: time="2025-12-27T20:28:39.373100296Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b79d63eb-ae4c-4b5d-aca5-ac82dd01195d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:28:39 no-preload-014435 crio[566]: time="2025-12-27T20:28:39.373228533Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:39 no-preload-014435 crio[566]: time="2025-12-27T20:28:39.378326615Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:39 no-preload-014435 crio[566]: time="2025-12-27T20:28:39.378463493Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1e57e90e881c7ad626db887738d785ea0d0edb965443dca370b6c2fe47a990b8/merged/etc/passwd: no such file or directory"
	Dec 27 20:28:39 no-preload-014435 crio[566]: time="2025-12-27T20:28:39.37848681Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1e57e90e881c7ad626db887738d785ea0d0edb965443dca370b6c2fe47a990b8/merged/etc/group: no such file or directory"
	Dec 27 20:28:39 no-preload-014435 crio[566]: time="2025-12-27T20:28:39.378686234Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:39 no-preload-014435 crio[566]: time="2025-12-27T20:28:39.406864337Z" level=info msg="Created container c901e1afa45cf91c5793bd5e02e20608288cabb56b1e113347e01c6d78bf95e4: kube-system/storage-provisioner/storage-provisioner" id=b79d63eb-ae4c-4b5d-aca5-ac82dd01195d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:28:39 no-preload-014435 crio[566]: time="2025-12-27T20:28:39.407386694Z" level=info msg="Starting container: c901e1afa45cf91c5793bd5e02e20608288cabb56b1e113347e01c6d78bf95e4" id=0f4888f3-4617-4411-9952-6b74b05e2565 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:28:39 no-preload-014435 crio[566]: time="2025-12-27T20:28:39.40901011Z" level=info msg="Started container" PID=1758 containerID=c901e1afa45cf91c5793bd5e02e20608288cabb56b1e113347e01c6d78bf95e4 description=kube-system/storage-provisioner/storage-provisioner id=0f4888f3-4617-4411-9952-6b74b05e2565 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c444ff1474e83735b4552bc2bebd9519a65a719c8e71305db4ae2b6e4c9b2502
	Dec 27 20:28:52 no-preload-014435 crio[566]: time="2025-12-27T20:28:52.237633314Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=375458a5-7129-4322-af7d-8e4f62071159 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:28:52 no-preload-014435 crio[566]: time="2025-12-27T20:28:52.35679411Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0a831b7a-b9e2-4cc3-8ac3-a39b1d3cbcb5 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:28:52 no-preload-014435 crio[566]: time="2025-12-27T20:28:52.372984756Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk/dashboard-metrics-scraper" id=f3bf7521-bffe-4804-965c-63d66e069a7f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:28:52 no-preload-014435 crio[566]: time="2025-12-27T20:28:52.373109021Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:52 no-preload-014435 crio[566]: time="2025-12-27T20:28:52.382184183Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:52 no-preload-014435 crio[566]: time="2025-12-27T20:28:52.382806898Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:52 no-preload-014435 crio[566]: time="2025-12-27T20:28:52.525339633Z" level=info msg="Created container a67c4e635ce57fb32f62f332eda073862febf665cb42072d6e2b8208a2cf15b1: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk/dashboard-metrics-scraper" id=f3bf7521-bffe-4804-965c-63d66e069a7f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:28:52 no-preload-014435 crio[566]: time="2025-12-27T20:28:52.526086801Z" level=info msg="Starting container: a67c4e635ce57fb32f62f332eda073862febf665cb42072d6e2b8208a2cf15b1" id=0062cd81-7534-428c-8bee-4eae2d49c9a6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:28:52 no-preload-014435 crio[566]: time="2025-12-27T20:28:52.528403789Z" level=info msg="Started container" PID=1797 containerID=a67c4e635ce57fb32f62f332eda073862febf665cb42072d6e2b8208a2cf15b1 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk/dashboard-metrics-scraper id=0062cd81-7534-428c-8bee-4eae2d49c9a6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f578af9f5f9f446bcedf8fa5800ed8103b745a26295e3dc9548bbb50c7f6fdea
	Dec 27 20:28:53 no-preload-014435 crio[566]: time="2025-12-27T20:28:53.414807865Z" level=info msg="Removing container: 9b37c4ffb76d82b01711eaee7de75c77f95dfb0a270e8c04874ff3747d88ea65" id=2d891500-2e8c-4a54-a84c-59d811116b1b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:28:53 no-preload-014435 crio[566]: time="2025-12-27T20:28:53.429322429Z" level=info msg="Removed container 9b37c4ffb76d82b01711eaee7de75c77f95dfb0a270e8c04874ff3747d88ea65: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk/dashboard-metrics-scraper" id=2d891500-2e8c-4a54-a84c-59d811116b1b name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a67c4e635ce57       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           12 seconds ago      Exited              dashboard-metrics-scraper   3                   f578af9f5f9f4       dashboard-metrics-scraper-867fb5f87b-zw6bk   kubernetes-dashboard
	c901e1afa45cf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   c444ff1474e83       storage-provisioner                          kube-system
	0ca092b3ce3ee       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   47 seconds ago      Running             kubernetes-dashboard        0                   e703844f36755       kubernetes-dashboard-b84665fb8-v6b7x         kubernetes-dashboard
	b9b509ee6a53f       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           56 seconds ago      Running             coredns                     0                   6da3e0d9e4163       coredns-7d764666f9-nvrq6                     kube-system
	0a5cfa6b4d2e8       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   a077a2bcd6334       busybox                                      default
	e95e282035952       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           56 seconds ago      Running             kube-proxy                  0                   0c801e912bf19       kube-proxy-ctvzq                             kube-system
	5e16382753ebc       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           56 seconds ago      Running             kindnet-cni                 0                   dea99ef478cce       kindnet-7pgwz                                kube-system
	b9a65772953dd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   c444ff1474e83       storage-provisioner                          kube-system
	ccbaca5423134       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           59 seconds ago      Running             kube-controller-manager     0                   56eafd8bcdf37       kube-controller-manager-no-preload-014435    kube-system
	7e08e2ae41d9f       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           59 seconds ago      Running             kube-scheduler              0                   127f0a21f7bd2       kube-scheduler-no-preload-014435             kube-system
	a18c71b2140b3       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           59 seconds ago      Running             kube-apiserver              0                   f8b580cdf6707       kube-apiserver-no-preload-014435             kube-system
	455bcec8a175a       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           59 seconds ago      Running             etcd                        0                   5fa2125c56149       etcd-no-preload-014435                       kube-system
	
	
	==> coredns [b9b509ee6a53f3f461dc21f5d20cb2ed21b39cc41369daf17da2bf1e93644530] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:47735 - 36198 "HINFO IN 8077970765044174866.8899174393879063754. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.069497073s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-014435
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-014435
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=no-preload-014435
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_27_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:27:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-014435
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:28:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:28:38 +0000   Sat, 27 Dec 2025 20:27:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:28:38 +0000   Sat, 27 Dec 2025 20:27:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:28:38 +0000   Sat, 27 Dec 2025 20:27:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:28:38 +0000   Sat, 27 Dec 2025 20:27:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-014435
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                16adf691-8e3a-4b05-b69e-6cb195641c2f
	  Boot ID:                    e862e3ac-f8e7-431b-a536-9252fc29cb3f
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-7d764666f9-nvrq6                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     112s
	  kube-system                 etcd-no-preload-014435                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-7pgwz                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-no-preload-014435              250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-no-preload-014435     200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-ctvzq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-no-preload-014435              100m (1%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-zw6bk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-v6b7x          0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  113s  node-controller  Node no-preload-014435 event: Registered Node no-preload-014435 in Controller
	  Normal  RegisteredNode  55s   node-controller  Node no-preload-014435 event: Registered Node no-preload-014435 in Controller
	
	
	==> dmesg <==
	[  +0.091803] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026212] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.286875] kauditd_printk_skb: 47 callbacks suppressed
	[Dec27 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e cb b0 88 25 d7 08 06
	[ +12.641300] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 46 2d 1c 8d 7d 08 06
	[  +0.000396] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e cb b0 88 25 d7 08 06
	[Dec27 20:26] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 11 8b 52 63 6c 08 06
	[ +22.750477] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 b6 be 1c 32 17 08 06
	[  +0.000388] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 11 8b 52 63 6c 08 06
	[ +11.650371] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae cd 49 2a 1d c0 08 06
	[Dec27 20:27] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 53 ca 84 73 c0 08 06
	[  +0.000337] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 74 f5 52 32 9a 08 06
	[ +17.275927] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 80 6a 51 25 19 08 06
	[  +0.000341] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ae cd 49 2a 1d c0 08 06
	
	
	==> etcd [455bcec8a175a162715bf84ffdfb0f031b741ff15312c1ed41956c3dafb97b6e] <==
	{"level":"info","ts":"2025-12-27T20:28:05.876453Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-27T20:28:05.876652Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T20:28:06.758777Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T20:28:06.758840Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T20:28:06.758972Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-27T20:28:06.759005Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:28:06.759025Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T20:28:06.759523Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-27T20:28:06.759562Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:28:06.759582Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T20:28:06.759592Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-27T20:28:06.760334Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:no-preload-014435 ClientURLs:[https://192.168.94.2:2379]}","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:28:06.760368Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:28:06.760427Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:28:06.760541Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:28:06.760565Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T20:28:06.761883Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:28:06.761871Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:28:06.765608Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-12-27T20:28:06.766523Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:28:52.383601Z","caller":"traceutil/trace.go:172","msg":"trace[512481360] linearizableReadLoop","detail":"{readStateIndex:681; appliedIndex:681; }","duration":"139.931957ms","start":"2025-12-27T20:28:52.243640Z","end":"2025-12-27T20:28:52.383572Z","steps":["trace[512481360] 'read index received'  (duration: 139.92184ms)","trace[512481360] 'applied index is now lower than readState.Index'  (duration: 8.63µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-27T20:28:52.383738Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"140.044729ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-27T20:28:52.383812Z","caller":"traceutil/trace.go:172","msg":"trace[2107378286] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:643; }","duration":"140.168248ms","start":"2025-12-27T20:28:52.243636Z","end":"2025-12-27T20:28:52.383804Z","steps":["trace[2107378286] 'agreement among raft nodes before linearized reading'  (duration: 140.017301ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:28:52.383808Z","caller":"traceutil/trace.go:172","msg":"trace[1165899591] transaction","detail":"{read_only:false; response_revision:644; number_of_response:1; }","duration":"142.595378ms","start":"2025-12-27T20:28:52.241198Z","end":"2025-12-27T20:28:52.383793Z","steps":["trace[1165899591] 'process raft request'  (duration: 142.409226ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:28:52.658487Z","caller":"traceutil/trace.go:172","msg":"trace[1031382787] transaction","detail":"{read_only:false; response_revision:645; number_of_response:1; }","duration":"128.351024ms","start":"2025-12-27T20:28:52.530120Z","end":"2025-12-27T20:28:52.658471Z","steps":["trace[1031382787] 'process raft request'  (duration: 128.253485ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:29:05 up  1:11,  0 user,  load average: 3.62, 3.24, 2.29
	Linux no-preload-014435 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5e16382753ebc8b7372b1654da2876be48c0fdc56c35110cbf3c811d7a0f6ed0] <==
	I1227 20:28:08.859485       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:28:08.859850       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1227 20:28:08.860118       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:28:08.860176       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:28:08.860222       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:28:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:28:09.155755       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:28:09.255477       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:28:09.255690       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:28:09.255946       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 20:28:09.656007       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:28:09.656040       1 metrics.go:72] Registering metrics
	I1227 20:28:09.656129       1 controller.go:711] "Syncing nftables rules"
	I1227 20:28:19.155291       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1227 20:28:19.155346       1 main.go:301] handling current node
	I1227 20:28:29.155818       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1227 20:28:29.155854       1 main.go:301] handling current node
	I1227 20:28:39.155775       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1227 20:28:39.155813       1 main.go:301] handling current node
	I1227 20:28:49.155162       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1227 20:28:49.155204       1 main.go:301] handling current node
	I1227 20:28:59.156061       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1227 20:28:59.156111       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a18c71b2140b36f980a1c87567071c8701b3e5f8aa4ab2f6fb52beb4656e9bd2] <==
	I1227 20:28:07.834174       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 20:28:07.834274       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 20:28:07.834480       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:07.834523       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 20:28:07.834533       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E1227 20:28:07.839106       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 20:28:07.839704       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1227 20:28:07.839801       1 aggregator.go:187] initial CRD sync complete...
	I1227 20:28:07.839881       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 20:28:07.839906       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 20:28:07.839994       1 cache.go:39] Caches are synced for autoregister controller
	I1227 20:28:07.839980       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 20:28:07.843790       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 20:28:07.859936       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:28:08.202618       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 20:28:08.258252       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 20:28:08.280202       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:28:08.287727       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:28:08.298665       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:28:08.359900       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.50.78"}
	I1227 20:28:08.372155       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.89.181"}
	I1227 20:28:08.737069       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:28:11.371878       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:28:11.471744       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:28:11.621367       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [ccbaca54231342c2e05e7f6f39601d2b1cde71a9e688363aa99cd9f16b7b740b] <==
	I1227 20:28:10.978642       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.978686       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.978714       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.978755       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.978788       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.978818       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.977998       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.979001       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.977878       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:28:10.977992       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.979604       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.979081       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.979094       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.979062       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.979104       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.979112       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.985564       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:28:10.979122       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.979129       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.979073       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.992951       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:11.078494       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:11.078521       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:28:11.078528       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:28:11.086516       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [e95e2820359521d157e6543a261cc1ecc9b5fcfaec66bf820cd24c038ec2d52f] <==
	I1227 20:28:08.680604       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:28:08.745496       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:28:08.845649       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:08.845691       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1227 20:28:08.845780       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:28:08.871818       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:28:08.871876       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:28:08.878621       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:28:08.879130       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:28:08.879163       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:28:08.881396       1 config.go:200] "Starting service config controller"
	I1227 20:28:08.881417       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:28:08.881445       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:28:08.881451       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:28:08.881519       1 config.go:309] "Starting node config controller"
	I1227 20:28:08.881525       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:28:08.881534       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:28:08.881837       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:28:08.881850       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:28:08.981907       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 20:28:08.982002       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:28:08.982459       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7e08e2ae41d9f45611fbcd90fd0c42176961f4b3b41557748ee9cae592b1a0c6] <==
	I1227 20:28:06.131767       1 serving.go:386] Generated self-signed cert in-memory
	W1227 20:28:07.772433       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 20:28:07.772470       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 20:28:07.772482       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 20:28:07.772496       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 20:28:07.814936       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 20:28:07.814969       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:28:07.818476       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 20:28:07.818903       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 20:28:07.818981       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:28:07.819035       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 20:28:07.919309       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:28:25 no-preload-014435 kubelet[709]: E1227 20:28:25.237004     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk" containerName="dashboard-metrics-scraper"
	Dec 27 20:28:25 no-preload-014435 kubelet[709]: I1227 20:28:25.237035     709 scope.go:122] "RemoveContainer" containerID="c06e8f182bdafe6d8a9f38a86e7345156bb2efd9ed93ead9991c0833628623c8"
	Dec 27 20:28:25 no-preload-014435 kubelet[709]: I1227 20:28:25.333995     709 scope.go:122] "RemoveContainer" containerID="c06e8f182bdafe6d8a9f38a86e7345156bb2efd9ed93ead9991c0833628623c8"
	Dec 27 20:28:25 no-preload-014435 kubelet[709]: E1227 20:28:25.334203     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk" containerName="dashboard-metrics-scraper"
	Dec 27 20:28:25 no-preload-014435 kubelet[709]: I1227 20:28:25.334236     709 scope.go:122] "RemoveContainer" containerID="9b37c4ffb76d82b01711eaee7de75c77f95dfb0a270e8c04874ff3747d88ea65"
	Dec 27 20:28:25 no-preload-014435 kubelet[709]: E1227 20:28:25.334424     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zw6bk_kubernetes-dashboard(4a798b25-d765-486c-826b-777556a58641)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk" podUID="4a798b25-d765-486c-826b-777556a58641"
	Dec 27 20:28:31 no-preload-014435 kubelet[709]: E1227 20:28:31.261217     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk" containerName="dashboard-metrics-scraper"
	Dec 27 20:28:31 no-preload-014435 kubelet[709]: I1227 20:28:31.261250     709 scope.go:122] "RemoveContainer" containerID="9b37c4ffb76d82b01711eaee7de75c77f95dfb0a270e8c04874ff3747d88ea65"
	Dec 27 20:28:31 no-preload-014435 kubelet[709]: E1227 20:28:31.261398     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zw6bk_kubernetes-dashboard(4a798b25-d765-486c-826b-777556a58641)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk" podUID="4a798b25-d765-486c-826b-777556a58641"
	Dec 27 20:28:39 no-preload-014435 kubelet[709]: I1227 20:28:39.370550     709 scope.go:122] "RemoveContainer" containerID="b9a65772953dd8fa2257dfb6c558509023e860f7da1e48e5f6bfae91bae57d63"
	Dec 27 20:28:47 no-preload-014435 kubelet[709]: E1227 20:28:47.879865     709 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-nvrq6" containerName="coredns"
	Dec 27 20:28:52 no-preload-014435 kubelet[709]: E1227 20:28:52.237041     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk" containerName="dashboard-metrics-scraper"
	Dec 27 20:28:52 no-preload-014435 kubelet[709]: I1227 20:28:52.237081     709 scope.go:122] "RemoveContainer" containerID="9b37c4ffb76d82b01711eaee7de75c77f95dfb0a270e8c04874ff3747d88ea65"
	Dec 27 20:28:53 no-preload-014435 kubelet[709]: I1227 20:28:53.413239     709 scope.go:122] "RemoveContainer" containerID="9b37c4ffb76d82b01711eaee7de75c77f95dfb0a270e8c04874ff3747d88ea65"
	Dec 27 20:28:53 no-preload-014435 kubelet[709]: E1227 20:28:53.413491     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk" containerName="dashboard-metrics-scraper"
	Dec 27 20:28:53 no-preload-014435 kubelet[709]: I1227 20:28:53.413522     709 scope.go:122] "RemoveContainer" containerID="a67c4e635ce57fb32f62f332eda073862febf665cb42072d6e2b8208a2cf15b1"
	Dec 27 20:28:53 no-preload-014435 kubelet[709]: E1227 20:28:53.413714     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zw6bk_kubernetes-dashboard(4a798b25-d765-486c-826b-777556a58641)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk" podUID="4a798b25-d765-486c-826b-777556a58641"
	Dec 27 20:29:01 no-preload-014435 kubelet[709]: E1227 20:29:01.261009     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk" containerName="dashboard-metrics-scraper"
	Dec 27 20:29:01 no-preload-014435 kubelet[709]: I1227 20:29:01.261061     709 scope.go:122] "RemoveContainer" containerID="a67c4e635ce57fb32f62f332eda073862febf665cb42072d6e2b8208a2cf15b1"
	Dec 27 20:29:01 no-preload-014435 kubelet[709]: E1227 20:29:01.261266     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zw6bk_kubernetes-dashboard(4a798b25-d765-486c-826b-777556a58641)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk" podUID="4a798b25-d765-486c-826b-777556a58641"
	Dec 27 20:29:01 no-preload-014435 kubelet[709]: I1227 20:29:01.700083     709 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 27 20:29:01 no-preload-014435 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 20:29:01 no-preload-014435 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 20:29:01 no-preload-014435 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:29:01 no-preload-014435 systemd[1]: kubelet.service: Consumed 1.789s CPU time.
	
	
	==> kubernetes-dashboard [0ca092b3ce3ee7591a69a1325fcce1cc752a14da32abb9827a163b163f63c990] <==
	2025/12/27 20:28:17 Using namespace: kubernetes-dashboard
	2025/12/27 20:28:17 Using in-cluster config to connect to apiserver
	2025/12/27 20:28:17 Using secret token for csrf signing
	2025/12/27 20:28:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 20:28:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 20:28:17 Successful initial request to the apiserver, version: v1.35.0
	2025/12/27 20:28:17 Generating JWE encryption key
	2025/12/27 20:28:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 20:28:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 20:28:17 Initializing JWE encryption key from synchronized object
	2025/12/27 20:28:17 Creating in-cluster Sidecar client
	2025/12/27 20:28:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 20:28:17 Serving insecurely on HTTP port: 9090
	2025/12/27 20:28:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 20:28:17 Starting overwatch
	
	
	==> storage-provisioner [b9a65772953dd8fa2257dfb6c558509023e860f7da1e48e5f6bfae91bae57d63] <==
	I1227 20:28:08.615982       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 20:28:38.619292       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c901e1afa45cf91c5793bd5e02e20608288cabb56b1e113347e01c6d78bf95e4] <==
	I1227 20:28:39.422554       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 20:28:39.430676       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 20:28:39.430738       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 20:28:39.432968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:42.888083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:47.149157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:50.748132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:53.802850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:56.824954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:56.831882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:28:56.832090       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 20:28:56.832283       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-014435_bd966e42-df16-4cba-af02-8db2023e0a1b!
	I1227 20:28:56.833440       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9f57cc67-39ae-4412-b3d6-f5e4088a0ea3", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-014435_bd966e42-df16-4cba-af02-8db2023e0a1b became leader
	W1227 20:28:56.839793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:56.849441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:28:56.933062       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-014435_bd966e42-df16-4cba-af02-8db2023e0a1b!
	W1227 20:28:58.852202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:58.857286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:00.861992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:00.870025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:02.873373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:02.878105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:04.882638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:04.891753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-014435 -n no-preload-014435
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-014435 -n no-preload-014435: exit status 2 (383.525255ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
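The status probes in this post-mortem use minikube's Go-template output (e.g. --format={{.APIServer}}, and --format={{.Host}} further below). As a rough illustration only, the sketch below renders the same kind of template over a stand-in Status struct; the struct and its values are assumptions for illustration, not minikube's actual type.
	package main
	
	import (
		"os"
		"text/template"
	)
	
	// Status is a hypothetical stand-in for the struct minikube renders with
	// --format; only the field names matter for the template lookup.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}
	
	func main() {
		// Equivalent of `--format={{.APIServer}}`: print just that one field.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		if err := tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Running", APIServer: "Running"}); err != nil {
			panic(err)
		}
	}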
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-014435 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
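The kubectl call above looks for any pods that are not Running via a server-side field selector. For reference, the same check expressed with client-go looks roughly like the sketch below; the kubeconfig handling and error handling are simplified assumptions, not part of the test harness.
	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Load the default kubeconfig, roughly what kubectl --context would use.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Mirrors --field-selector=status.phase!=Running across all namespaces (-A).
		pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s is %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}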
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-014435
helpers_test.go:244: (dbg) docker inspect no-preload-014435:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8d514d0c2855a01d46c86888a6d5e056ab094bb969dd5844893ce45192dbf091",
	        "Created": "2025-12-27T20:26:44.562734517Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 329749,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:27:58.669801555Z",
	            "FinishedAt": "2025-12-27T20:27:57.667942615Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/8d514d0c2855a01d46c86888a6d5e056ab094bb969dd5844893ce45192dbf091/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d514d0c2855a01d46c86888a6d5e056ab094bb969dd5844893ce45192dbf091/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d514d0c2855a01d46c86888a6d5e056ab094bb969dd5844893ce45192dbf091/hosts",
	        "LogPath": "/var/lib/docker/containers/8d514d0c2855a01d46c86888a6d5e056ab094bb969dd5844893ce45192dbf091/8d514d0c2855a01d46c86888a6d5e056ab094bb969dd5844893ce45192dbf091-json.log",
	        "Name": "/no-preload-014435",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-014435:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-014435",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d514d0c2855a01d46c86888a6d5e056ab094bb969dd5844893ce45192dbf091",
	                "LowerDir": "/var/lib/docker/overlay2/114d17e71bca22dd5824b5f17afeb5d5341158842a7d6bb4e59223bea0882373-init/diff:/var/lib/docker/overlay2/37a0694d5e09176fe02add5b16d603e44d65e3a985a3465775c60819ea782502/diff",
	                "MergedDir": "/var/lib/docker/overlay2/114d17e71bca22dd5824b5f17afeb5d5341158842a7d6bb4e59223bea0882373/merged",
	                "UpperDir": "/var/lib/docker/overlay2/114d17e71bca22dd5824b5f17afeb5d5341158842a7d6bb4e59223bea0882373/diff",
	                "WorkDir": "/var/lib/docker/overlay2/114d17e71bca22dd5824b5f17afeb5d5341158842a7d6bb4e59223bea0882373/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-014435",
	                "Source": "/var/lib/docker/volumes/no-preload-014435/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-014435",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-014435",
	                "name.minikube.sigs.k8s.io": "no-preload-014435",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a4adaeaae06cff8ec29ec07cd00d06c6c44ccd16fdf2c795372c00fb52115742",
	            "SandboxKey": "/var/run/docker/netns/a4adaeaae06c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-014435": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "da47a33f1df0e45ac0871af30769ae1b8230bf0f77cd43d071316f15c5ec0145",
	                    "EndpointID": "4115876f23751fee1d7adc732e225b724b5e3af60589ff142ba4fc8783a35e37",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "be:1b:28:d6:66:f8",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-014435",
	                        "8d514d0c2855"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
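The NetworkSettings.Ports map in the inspect output above is where the published host ports live (8443/tcp -> 127.0.0.1:33116 for the apiserver). A minimal sketch of reading that mapping out of `docker inspect` JSON; the trimmed struct below is an assumption that covers only the fields read here.
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// inspectEntry keeps only the fields read below; `docker inspect`
	// returns a JSON array of objects shaped like the block above.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}
	
	func main() {
		out, err := exec.Command("docker", "inspect", "no-preload-014435").Output()
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			panic(err)
		}
		// 8443/tcp is the apiserver port published to the host loopback.
		for _, e := range entries {
			for _, b := range e.NetworkSettings.Ports["8443/tcp"] {
				fmt.Printf("apiserver reachable at %s:%s\n", b.HostIp, b.HostPort)
			}
		}
	}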
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-014435 -n no-preload-014435
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-014435 -n no-preload-014435: exit status 2 (401.098363ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-014435 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-014435 logs -n 25: (1.331698905s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-436655 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ ssh     │ -p bridge-436655 sudo crio config                                                                                                                                                                                                             │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ delete  │ -p bridge-436655                                                                                                                                                                                                                              │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ addons  │ enable metrics-server -p no-preload-014435 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ delete  │ -p disable-driver-mounts-541137                                                                                                                                                                                                               │ disable-driver-mounts-541137 │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ start   │ -p old-k8s-version-762177 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-762177       │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:28 UTC │
	│ start   │ -p default-k8s-diff-port-954154 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-954154 │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:28 UTC │
	│ stop    │ -p no-preload-014435 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ addons  │ enable dashboard -p no-preload-014435 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ start   │ -p no-preload-014435 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:28 UTC │
	│ addons  │ enable metrics-server -p embed-certs-820583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-820583           │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ stop    │ -p embed-certs-820583 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-820583           │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-954154 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-954154 │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-820583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-820583           │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ start   │ -p embed-certs-820583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-820583           │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-954154 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-954154 │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ image   │ old-k8s-version-762177 image list --format=json                                                                                                                                                                                               │ old-k8s-version-762177       │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ pause   │ -p old-k8s-version-762177 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-762177       │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ delete  │ -p old-k8s-version-762177                                                                                                                                                                                                                     │ old-k8s-version-762177       │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-954154 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-954154 │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ start   │ -p default-k8s-diff-port-954154 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-954154 │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ delete  │ -p old-k8s-version-762177                                                                                                                                                                                                                     │ old-k8s-version-762177       │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ start   │ -p newest-cni-307728 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-307728            │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ image   │ no-preload-014435 image list --format=json                                                                                                                                                                                                    │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ pause   │ -p no-preload-014435 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:28:48
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:28:48.500169  340625 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:28:48.500408  340625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:28:48.500418  340625 out.go:374] Setting ErrFile to fd 2...
	I1227 20:28:48.500422  340625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:28:48.500700  340625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:28:48.501196  340625 out.go:368] Setting JSON to false
	I1227 20:28:48.502349  340625 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4277,"bootTime":1766863051,"procs":335,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 20:28:48.502402  340625 start.go:143] virtualization: kvm guest
	I1227 20:28:48.504445  340625 out.go:179] * [newest-cni-307728] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 20:28:48.506067  340625 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:28:48.506073  340625 notify.go:221] Checking for updates...
	I1227 20:28:48.507389  340625 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:28:48.510117  340625 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:28:48.511411  340625 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube
	I1227 20:28:48.516227  340625 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 20:28:48.520113  340625 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:28:48.522736  340625 config.go:182] Loaded profile config "default-k8s-diff-port-954154": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:48.522908  340625 config.go:182] Loaded profile config "embed-certs-820583": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:48.523079  340625 config.go:182] Loaded profile config "no-preload-014435": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:48.523223  340625 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:28:48.555608  340625 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 20:28:48.555757  340625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:28:48.625448  340625 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-27 20:28:48.613118826 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:28:48.625559  340625 docker.go:319] overlay module found
	I1227 20:28:48.627785  340625 out.go:179] * Using the docker driver based on user configuration
	I1227 20:28:48.628870  340625 start.go:309] selected driver: docker
	I1227 20:28:48.628893  340625 start.go:928] validating driver "docker" against <nil>
	I1227 20:28:48.628904  340625 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:28:48.629485  340625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:28:48.682637  340625 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-27 20:28:48.673679788 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:28:48.682799  340625 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	W1227 20:28:48.682830  340625 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1227 20:28:48.683062  340625 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 20:28:48.684798  340625 out.go:179] * Using Docker driver with root privileges
	I1227 20:28:48.685773  340625 cni.go:84] Creating CNI manager for ""
	I1227 20:28:48.685860  340625 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:28:48.685876  340625 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 20:28:48.685963  340625 start.go:353] cluster config:
	{Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:28:48.687269  340625 out.go:179] * Starting "newest-cni-307728" primary control-plane node in "newest-cni-307728" cluster
	I1227 20:28:48.688261  340625 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:28:48.689286  340625 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:28:48.690196  340625 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:28:48.690233  340625 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 20:28:48.690256  340625 cache.go:65] Caching tarball of preloaded images
	I1227 20:28:48.690277  340625 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:28:48.690345  340625 preload.go:251] Found /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 20:28:48.690356  340625 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:28:48.690441  340625 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/config.json ...
	I1227 20:28:48.690458  340625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/config.json: {Name:mke21830f72797f51981ebb2ed1e325363bf8b3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:48.710101  340625 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:28:48.710116  340625 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:28:48.710130  340625 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:28:48.710159  340625 start.go:360] acquireMachinesLock for newest-cni-307728: {Name:mk68119d6288f8bd1ffbe980f508592c691efe9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:28:48.710240  340625 start.go:364] duration metric: took 67.403µs to acquireMachinesLock for "newest-cni-307728"
	I1227 20:28:48.710260  340625 start.go:93] Provisioning new machine with config: &{Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:28:48.710333  340625 start.go:125] createHost starting for "" (driver="docker")
	I1227 20:28:48.407162  329454 pod_ready.go:83] waiting for pod "kube-proxy-ctvzq" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:48.806633  329454 pod_ready.go:94] pod "kube-proxy-ctvzq" is "Ready"
	I1227 20:28:48.806662  329454 pod_ready.go:86] duration metric: took 399.473531ms for pod "kube-proxy-ctvzq" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:49.008047  329454 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-014435" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:49.407573  329454 pod_ready.go:94] pod "kube-scheduler-no-preload-014435" is "Ready"
	I1227 20:28:49.407604  329454 pod_ready.go:86] duration metric: took 399.528497ms for pod "kube-scheduler-no-preload-014435" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:49.407621  329454 pod_ready.go:40] duration metric: took 39.908277209s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:28:49.460861  329454 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 20:28:49.462607  329454 out.go:179] * Done! kubectl is now configured to use "no-preload-014435" cluster and "default" namespace by default
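The "Done!" entry above is the point at which minikube rewrites the active kubeconfig for the "no-preload-014435" profile. A minimal spot-check of that state (illustrative only; it assumes kubectl resolves the same kubeconfig the run above just updated):

    kubectl config current-context   # expected: no-preload-014435
    kubectl get nodes -o wide        # the control-plane node the pod_ready waits above ran against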
	I1227 20:28:47.861047  340025 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-954154" ...
	I1227 20:28:47.861132  340025 cli_runner.go:164] Run: docker start default-k8s-diff-port-954154
	I1227 20:28:48.268150  340025 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-954154 --format={{.State.Status}}
	I1227 20:28:48.287351  340025 kic.go:430] container "default-k8s-diff-port-954154" state is running.
	I1227 20:28:48.287786  340025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-954154
	I1227 20:28:48.306988  340025 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/default-k8s-diff-port-954154/config.json ...
	I1227 20:28:48.307243  340025 machine.go:94] provisionDockerMachine start ...
	I1227 20:28:48.307328  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:48.327277  340025 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:48.327553  340025 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1227 20:28:48.327574  340025 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:28:48.328160  340025 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45072->127.0.0.1:33123: read: connection reset by peer
	I1227 20:28:51.449690  340025 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-954154
	
	I1227 20:28:51.449714  340025 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-954154"
	I1227 20:28:51.449773  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:51.468688  340025 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:51.468993  340025 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1227 20:28:51.469013  340025 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-954154 && echo "default-k8s-diff-port-954154" | sudo tee /etc/hostname
	I1227 20:28:51.599535  340025 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-954154
	
	I1227 20:28:51.599621  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:51.617442  340025 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:51.617738  340025 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1227 20:28:51.617772  340025 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-954154' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-954154/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-954154' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:28:51.741706  340025 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:28:51.741734  340025 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-10897/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-10897/.minikube}
	I1227 20:28:51.741756  340025 ubuntu.go:190] setting up certificates
	I1227 20:28:51.741775  340025 provision.go:84] configureAuth start
	I1227 20:28:51.741846  340025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-954154
	I1227 20:28:51.759749  340025 provision.go:143] copyHostCerts
	I1227 20:28:51.759817  340025 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem, removing ...
	I1227 20:28:51.759836  340025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem
	I1227 20:28:51.759925  340025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem (1078 bytes)
	I1227 20:28:51.760058  340025 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem, removing ...
	I1227 20:28:51.760071  340025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem
	I1227 20:28:51.760106  340025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem (1123 bytes)
	I1227 20:28:51.760260  340025 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem, removing ...
	I1227 20:28:51.760273  340025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem
	I1227 20:28:51.760304  340025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem (1675 bytes)
	I1227 20:28:51.760381  340025 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-954154 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-954154 localhost minikube]
	I1227 20:28:51.832661  340025 provision.go:177] copyRemoteCerts
	I1227 20:28:51.832729  340025 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:28:51.832777  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:51.851087  340025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/default-k8s-diff-port-954154/id_rsa Username:docker}
	I1227 20:28:51.942212  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:28:51.959450  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1227 20:28:51.975995  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:28:51.992950  340025 provision.go:87] duration metric: took 251.152964ms to configureAuth
	I1227 20:28:51.992976  340025 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:28:51.993158  340025 config.go:182] Loaded profile config "default-k8s-diff-port-954154": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:51.993315  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:52.011400  340025 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:52.011631  340025 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1227 20:28:52.011652  340025 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1227 20:28:48.641299  334810 pod_ready.go:104] pod "coredns-7d764666f9-nvnjg" is not "Ready", error: <nil>
	W1227 20:28:51.141968  334810 pod_ready.go:104] pod "coredns-7d764666f9-nvnjg" is not "Ready", error: <nil>
	I1227 20:28:48.711836  340625 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 20:28:48.712071  340625 start.go:159] libmachine.API.Create for "newest-cni-307728" (driver="docker")
	I1227 20:28:48.712103  340625 client.go:173] LocalClient.Create starting
	I1227 20:28:48.712145  340625 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem
	I1227 20:28:48.712175  340625 main.go:144] libmachine: Decoding PEM data...
	I1227 20:28:48.712193  340625 main.go:144] libmachine: Parsing certificate...
	I1227 20:28:48.712238  340625 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem
	I1227 20:28:48.712255  340625 main.go:144] libmachine: Decoding PEM data...
	I1227 20:28:48.712265  340625 main.go:144] libmachine: Parsing certificate...
	I1227 20:28:48.712589  340625 cli_runner.go:164] Run: docker network inspect newest-cni-307728 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 20:28:48.727613  340625 cli_runner.go:211] docker network inspect newest-cni-307728 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 20:28:48.727667  340625 network_create.go:284] running [docker network inspect newest-cni-307728] to gather additional debugging logs...
	I1227 20:28:48.727682  340625 cli_runner.go:164] Run: docker network inspect newest-cni-307728
	W1227 20:28:48.743879  340625 cli_runner.go:211] docker network inspect newest-cni-307728 returned with exit code 1
	I1227 20:28:48.743905  340625 network_create.go:287] error running [docker network inspect newest-cni-307728]: docker network inspect newest-cni-307728: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-307728 not found
	I1227 20:28:48.743927  340625 network_create.go:289] output of [docker network inspect newest-cni-307728]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-307728 not found
	
	** /stderr **
	I1227 20:28:48.744059  340625 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:28:48.760635  340625 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0db0ba8938bc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:24:76:f1:9a:26} reservation:<nil>}
	I1227 20:28:48.761253  340625 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-11f8d597a005 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:16:b4:6c:7e:ff:91} reservation:<nil>}
	I1227 20:28:48.762075  340625 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f7cf3350a110 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:14:0b:19:b4:4d} reservation:<nil>}
	I1227 20:28:48.762703  340625 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-df613bfb14c3 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5a:e7:81:22:a5:aa} reservation:<nil>}
	I1227 20:28:48.763398  340625 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-8bb8ec9ff71c IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:c2:ba:45:ee:97:15} reservation:<nil>}
	I1227 20:28:48.763977  340625 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-da47a33f1df0 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:b6:9e:57:b1:b3:31} reservation:<nil>}
	I1227 20:28:48.764830  340625 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ee9150}
	I1227 20:28:48.764849  340625 network_create.go:124] attempt to create docker network newest-cni-307728 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1227 20:28:48.764905  340625 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-307728 newest-cni-307728
	I1227 20:28:48.812651  340625 network_create.go:108] docker network newest-cni-307728 192.168.103.0/24 created
	I1227 20:28:48.812678  340625 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-307728" container
	I1227 20:28:48.812754  340625 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 20:28:48.829760  340625 cli_runner.go:164] Run: docker volume create newest-cni-307728 --label name.minikube.sigs.k8s.io=newest-cni-307728 --label created_by.minikube.sigs.k8s.io=true
	I1227 20:28:48.846811  340625 oci.go:103] Successfully created a docker volume newest-cni-307728
	I1227 20:28:48.846879  340625 cli_runner.go:164] Run: docker run --rm --name newest-cni-307728-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-307728 --entrypoint /usr/bin/test -v newest-cni-307728:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 20:28:49.251301  340625 oci.go:107] Successfully prepared a docker volume newest-cni-307728
	I1227 20:28:49.251356  340625 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:28:49.251371  340625 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 20:28:49.251443  340625 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-307728:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 20:28:53.048259  340625 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-307728:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.796770201s)
	I1227 20:28:53.048298  340625 kic.go:203] duration metric: took 3.796923553s to extract preloaded images to volume ...
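The two docker run invocations above seed the newest-cni-307728 volume from the local preload tarball before the node container exists. The same pattern, shown as a sketch with placeholder names (demo-vol, the tarball path, and BASE_IMAGE are stand-ins, not values from this run):

    docker volume create demo-vol
    docker run --rm --entrypoint /usr/bin/tar \
      -v /path/to/preloaded-images.tar.lz4:/preloaded.tar:ro \
      -v demo-vol:/extractDir \
      "$BASE_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir   # requires lz4 inside the image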
	W1227 20:28:53.048388  340625 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1227 20:28:53.048428  340625 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1227 20:28:53.048478  340625 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 20:28:53.106204  340625 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-307728 --name newest-cni-307728 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-307728 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-307728 --network newest-cni-307728 --ip 192.168.103.2 --volume newest-cni-307728:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 20:28:53.377715  340625 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Running}}
	I1227 20:28:53.396903  340625 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:28:53.419741  340625 cli_runner.go:164] Run: docker exec newest-cni-307728 stat /var/lib/dpkg/alternatives/iptables
	I1227 20:28:53.470424  340625 oci.go:144] the created container "newest-cni-307728" has a running status.
	I1227 20:28:53.470467  340625 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa...
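The docker run above publishes ports 22, 2376, 5000, 8443 and 32443 on ephemeral host ports, and the log resolves the SSH port with the same inspect template it uses elsewhere. For reference, the equivalent manual query (container name taken from this run):

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-307728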
	I1227 20:28:53.122891  340025 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:28:53.122939  340025 machine.go:97] duration metric: took 4.815676959s to provisionDockerMachine
	I1227 20:28:53.122954  340025 start.go:293] postStartSetup for "default-k8s-diff-port-954154" (driver="docker")
	I1227 20:28:53.122967  340025 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:28:53.123032  340025 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:28:53.123077  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:53.143650  340025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/default-k8s-diff-port-954154/id_rsa Username:docker}
	I1227 20:28:53.241428  340025 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:28:53.245431  340025 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:28:53.245463  340025 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:28:53.245476  340025 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/addons for local assets ...
	I1227 20:28:53.245527  340025 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/files for local assets ...
	I1227 20:28:53.245638  340025 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem -> 144272.pem in /etc/ssl/certs
	I1227 20:28:53.245750  340025 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:28:53.259754  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:28:53.277646  340025 start.go:296] duration metric: took 154.680132ms for postStartSetup
	I1227 20:28:53.277719  340025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:28:53.277784  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:53.296596  340025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/default-k8s-diff-port-954154/id_rsa Username:docker}
	I1227 20:28:53.388191  340025 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:28:53.393370  340025 fix.go:56] duration metric: took 5.55357325s for fixHost
	I1227 20:28:53.393399  340025 start.go:83] releasing machines lock for "default-k8s-diff-port-954154", held for 5.5536385s
	I1227 20:28:53.393469  340025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-954154
	I1227 20:28:53.414844  340025 ssh_runner.go:195] Run: cat /version.json
	I1227 20:28:53.414940  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:53.414964  340025 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:28:53.415054  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:53.439070  340025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/default-k8s-diff-port-954154/id_rsa Username:docker}
	I1227 20:28:53.441062  340025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/default-k8s-diff-port-954154/id_rsa Username:docker}
	I1227 20:28:53.530425  340025 ssh_runner.go:195] Run: systemctl --version
	I1227 20:28:53.593039  340025 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:28:53.635364  340025 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:28:53.640899  340025 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:28:53.641235  340025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:28:53.650333  340025 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:28:53.650354  340025 start.go:496] detecting cgroup driver to use...
	I1227 20:28:53.650397  340025 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 20:28:53.650439  340025 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:28:53.668529  340025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:28:53.689731  340025 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:28:53.689800  340025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:28:53.709961  340025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:28:53.727001  340025 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:28:53.832834  340025 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:28:53.922310  340025 docker.go:234] disabling docker service ...
	I1227 20:28:53.922365  340025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:28:53.936501  340025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:28:53.950162  340025 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:28:54.041512  340025 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:28:54.134155  340025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:28:54.147385  340025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:28:54.161720  340025 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:28:54.161796  340025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:54.171044  340025 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 20:28:54.171106  340025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:54.180065  340025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:54.189150  340025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:54.197442  340025 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:28:54.205514  340025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:54.213985  340025 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:54.222017  340025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:54.230077  340025 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:28:54.237131  340025 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:28:54.244100  340025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:28:54.328674  340025 ssh_runner.go:195] Run: sudo systemctl restart crio
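Taken together, the sed edits above pin the pause image, switch CRI-O to the systemd cgroup manager with conmon in the pod cgroup, and allow unprivileged low ports before crio is restarted. A quick way to verify the resulting drop-in on the node (illustrative; profile name from this run):

    minikube -p default-k8s-diff-port-954154 ssh -- \
      sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",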
	I1227 20:28:54.465250  340025 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:28:54.465333  340025 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:28:54.469352  340025 start.go:574] Will wait 60s for crictl version
	I1227 20:28:54.469401  340025 ssh_runner.go:195] Run: which crictl
	I1227 20:28:54.472893  340025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:28:54.498891  340025 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:28:54.498989  340025 ssh_runner.go:195] Run: crio --version
	I1227 20:28:54.526206  340025 ssh_runner.go:195] Run: crio --version
	I1227 20:28:54.556148  340025 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:28:54.557374  340025 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-954154 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:28:54.575049  340025 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 20:28:54.578875  340025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:28:54.588766  340025 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-954154 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-954154 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:28:54.588870  340025 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:28:54.588927  340025 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:28:54.619011  340025 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:28:54.619030  340025 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:28:54.619069  340025 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:28:54.646154  340025 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:28:54.646177  340025 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:28:54.646185  340025 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.35.0 crio true true} ...
	I1227 20:28:54.646334  340025 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-954154 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-954154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:28:54.646422  340025 ssh_runner.go:195] Run: crio config
	I1227 20:28:54.692232  340025 cni.go:84] Creating CNI manager for ""
	I1227 20:28:54.692253  340025 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:28:54.692268  340025 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:28:54.692305  340025 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-954154 NodeName:default-k8s-diff-port-954154 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:28:54.692423  340025 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-954154"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:28:54.692483  340025 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:28:54.700975  340025 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:28:54.701056  340025 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:28:54.709484  340025 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1227 20:28:54.722400  340025 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:28:54.735438  340025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1227 20:28:54.747514  340025 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:28:54.751059  340025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:28:54.761269  340025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:28:54.842277  340025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:28:54.869119  340025 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/default-k8s-diff-port-954154 for IP: 192.168.85.2
	I1227 20:28:54.869143  340025 certs.go:195] generating shared ca certs ...
	I1227 20:28:54.869164  340025 certs.go:227] acquiring lock for ca certs: {Name:mkf159d13acb73e6d0ac046e4aaef1dce56a185d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:54.869322  340025 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key
	I1227 20:28:54.869377  340025 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key
	I1227 20:28:54.869391  340025 certs.go:257] generating profile certs ...
	I1227 20:28:54.869519  340025 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/default-k8s-diff-port-954154/client.key
	I1227 20:28:54.869600  340025 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/default-k8s-diff-port-954154/apiserver.key.b37aaa7a
	I1227 20:28:54.869654  340025 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/default-k8s-diff-port-954154/proxy-client.key
	I1227 20:28:54.869797  340025 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem (1338 bytes)
	W1227 20:28:54.869837  340025 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427_empty.pem, impossibly tiny 0 bytes
	I1227 20:28:54.869849  340025 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 20:28:54.869881  340025 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:28:54.869933  340025 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:28:54.869976  340025 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem (1675 bytes)
	I1227 20:28:54.870034  340025 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:28:54.870823  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:28:54.889499  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:28:54.908467  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:28:54.928722  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 20:28:54.956319  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/default-k8s-diff-port-954154/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 20:28:54.976184  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/default-k8s-diff-port-954154/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 20:28:54.992715  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/default-k8s-diff-port-954154/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:28:55.009591  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/default-k8s-diff-port-954154/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 20:28:55.025543  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /usr/share/ca-certificates/144272.pem (1708 bytes)
	I1227 20:28:55.042531  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:28:55.061224  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem --> /usr/share/ca-certificates/14427.pem (1338 bytes)
	I1227 20:28:55.081310  340025 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:28:55.095303  340025 ssh_runner.go:195] Run: openssl version
	I1227 20:28:55.101512  340025 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/144272.pem
	I1227 20:28:55.109364  340025 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/144272.pem /etc/ssl/certs/144272.pem
	I1227 20:28:55.117062  340025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144272.pem
	I1227 20:28:55.120521  340025 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 19:58 /usr/share/ca-certificates/144272.pem
	I1227 20:28:55.120562  340025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144272.pem
	I1227 20:28:55.156522  340025 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:28:55.163769  340025 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:55.170984  340025 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:28:55.178467  340025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:55.182664  340025 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:55.182714  340025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:55.216669  340025 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:28:55.224508  340025 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14427.pem
	I1227 20:28:55.231727  340025 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14427.pem /etc/ssl/certs/14427.pem
	I1227 20:28:55.240655  340025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14427.pem
	I1227 20:28:55.244863  340025 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 19:58 /usr/share/ca-certificates/14427.pem
	I1227 20:28:55.244927  340025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14427.pem
	I1227 20:28:55.281470  340025 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:28:55.288784  340025 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:28:55.292510  340025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:28:55.333080  340025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:28:55.369784  340025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:28:55.424693  340025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:28:55.475758  340025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:28:55.533819  340025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
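Each openssl invocation above uses -checkend 86400, which succeeds only if the certificate remains valid for at least the next 86400 seconds (24 hours). Run directly on the node, the standalone form is simply:

    # prints "Certificate will not expire" and exits 0 when the cert is good for >= 24h
    openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt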
	I1227 20:28:55.591758  340025 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-954154 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-954154 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:28:55.591848  340025 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:28:55.591890  340025 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:28:55.627989  340025 cri.go:96] found id: "5931439a8c0a571b5456aa8ff89ec7efa4a328c297d4825276b5ce8da13b99a6"
	I1227 20:28:55.628014  340025 cri.go:96] found id: "706a22c5fabaf1ee5036f9f7e94b4c7a02acaeb2002009e6416b6682929512f2"
	I1227 20:28:55.628020  340025 cri.go:96] found id: "0959afbe1a9958167e4657feae2ee275c220e183182d01f3bfb3c248002f75ad"
	I1227 20:28:55.628027  340025 cri.go:96] found id: "8b0860861ddb032e4078fd3575653ab5f5a77cc25e17d9ecb7910330acbef6e8"
	I1227 20:28:55.628032  340025 cri.go:96] found id: ""
	I1227 20:28:55.628077  340025 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 20:28:55.642876  340025 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:28:55Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:28:55.642973  340025 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:28:55.652554  340025 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:28:55.652578  340025 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:28:55.652625  340025 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:28:55.660979  340025 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:28:55.662107  340025 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-954154" does not appear in /home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:28:55.662856  340025 kubeconfig.go:62] /home/jenkins/minikube-integration/22332-10897/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-954154" cluster setting kubeconfig missing "default-k8s-diff-port-954154" context setting]
	I1227 20:28:55.664153  340025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/kubeconfig: {Name:mk4ccafb996928672a8e78a62ce47d8add645009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
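The "needs updating (will repair)" message above means only that the profile's cluster and context entries are missing from the shared kubeconfig, so the file is rewritten under a lock rather than the cluster being reconfigured. A stripped-down sketch of that kind of repair using client-go's clientcmd helpers (the path, profile name and server URL follow the log but are illustrative, and certificate fields are omitted):

package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/clientcmd/api"
)

// addProfile inserts a cluster, user and matching context for a minikube
// profile into an existing kubeconfig, then writes the file back out.
func addProfile(kubeconfig, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		return err
	}
	cluster := api.NewCluster()
	cluster.Server = server
	cfg.Clusters[name] = cluster

	cfg.AuthInfos[name] = api.NewAuthInfo() // client cert/key paths left out here

	ctx := api.NewContext()
	ctx.Cluster = name
	ctx.AuthInfo = name
	cfg.Contexts[name] = ctx

	return clientcmd.WriteToFile(*cfg, kubeconfig)
}

func main() {
	if err := addProfile("/home/jenkins/minikube-integration/22332-10897/kubeconfig",
		"default-k8s-diff-port-954154", "https://192.168.85.2:8444"); err != nil {
		log.Fatal(err)
	}
}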
	I1227 20:28:55.666338  340025 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:28:55.676564  340025 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1227 20:28:55.676593  340025 kubeadm.go:602] duration metric: took 24.008347ms to restartPrimaryControlPlane
	I1227 20:28:55.676602  340025 kubeadm.go:403] duration metric: took 84.854268ms to StartCluster
	I1227 20:28:55.676617  340025 settings.go:142] acquiring lock: {Name:mkf3077b12a3cef0d75d0d0642c2652aa74718f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:55.676673  340025 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:28:55.678946  340025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/kubeconfig: {Name:mk4ccafb996928672a8e78a62ce47d8add645009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:55.679180  340025 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:28:55.679553  340025 config.go:182] Loaded profile config "default-k8s-diff-port-954154": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:55.679619  340025 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:28:55.679775  340025 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-954154"
	I1227 20:28:55.679791  340025 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-954154"
	W1227 20:28:55.679799  340025 addons.go:248] addon storage-provisioner should already be in state true
	I1227 20:28:55.679823  340025 host.go:66] Checking if "default-k8s-diff-port-954154" exists ...
	I1227 20:28:55.679928  340025 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-954154"
	I1227 20:28:55.679956  340025 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-954154"
	W1227 20:28:55.679964  340025 addons.go:248] addon dashboard should already be in state true
	I1227 20:28:55.679991  340025 host.go:66] Checking if "default-k8s-diff-port-954154" exists ...
	I1227 20:28:55.680547  340025 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-954154 --format={{.State.Status}}
	I1227 20:28:55.680638  340025 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-954154"
	I1227 20:28:55.680657  340025 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-954154"
	I1227 20:28:55.681186  340025 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-954154 --format={{.State.Status}}
	I1227 20:28:55.683393  340025 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-954154 --format={{.State.Status}}
	I1227 20:28:55.683653  340025 out.go:179] * Verifying Kubernetes components...
	I1227 20:28:55.684780  340025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:28:55.714633  340025 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:28:55.716852  340025 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-954154"
	W1227 20:28:55.716877  340025 addons.go:248] addon default-storageclass should already be in state true
	I1227 20:28:55.716906  340025 host.go:66] Checking if "default-k8s-diff-port-954154" exists ...
	I1227 20:28:55.717089  340025 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 20:28:55.717135  340025 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:28:55.717147  340025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:28:55.717215  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:55.717777  340025 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-954154 --format={{.State.Status}}
	I1227 20:28:55.722759  340025 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 20:28:53.664479  340625 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 20:28:53.699126  340625 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:28:53.720716  340625 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 20:28:53.720739  340625 kic_runner.go:114] Args: [docker exec --privileged newest-cni-307728 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 20:28:53.774092  340625 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:28:53.795085  340625 machine.go:94] provisionDockerMachine start ...
	I1227 20:28:53.795200  340625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:28:53.815121  340625 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:53.815367  340625 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1227 20:28:53.815380  340625 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:28:53.946421  340625 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-307728
	
	I1227 20:28:53.946449  340625 ubuntu.go:182] provisioning hostname "newest-cni-307728"
	I1227 20:28:53.946514  340625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:28:53.967479  340625 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:53.967688  340625 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1227 20:28:53.967701  340625 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-307728 && echo "newest-cni-307728" | sudo tee /etc/hostname
	I1227 20:28:54.109706  340625 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-307728
	
	I1227 20:28:54.109778  340625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:28:54.129736  340625 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:54.129958  340625 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1227 20:28:54.129980  340625 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-307728' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-307728/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-307728' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:28:54.255088  340625 main.go:144] libmachine: SSH cmd err, output: <nil>: 
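provisionDockerMachine above talks to the container over plain SSH on a published host port (127.0.0.1:33128 here), authenticating as the docker user with the profile's id_rsa key and running the hostname and /etc/hosts fix-ups via sudo. A minimal sketch of that native-SSH path (address and key path follow the log; host-key handling is simplified for brevity):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH runs one command on the machine over SSH with key auth,
// roughly as the "native" SSH client lines in the log do.
func runOverSSH(addr, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // simplification for throwaway test machines
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("127.0.0.1:33128",
		os.ExpandEnv("$HOME/.minikube/machines/newest-cni-307728/id_rsa"), "hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}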
	I1227 20:28:54.255111  340625 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-10897/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-10897/.minikube}
	I1227 20:28:54.255160  340625 ubuntu.go:190] setting up certificates
	I1227 20:28:54.255172  340625 provision.go:84] configureAuth start
	I1227 20:28:54.255217  340625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-307728
	I1227 20:28:54.276936  340625 provision.go:143] copyHostCerts
	I1227 20:28:54.276997  340625 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem, removing ...
	I1227 20:28:54.277008  340625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem
	I1227 20:28:54.277094  340625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem (1078 bytes)
	I1227 20:28:54.277219  340625 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem, removing ...
	I1227 20:28:54.277228  340625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem
	I1227 20:28:54.277279  340625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem (1123 bytes)
	I1227 20:28:54.277365  340625 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem, removing ...
	I1227 20:28:54.277372  340625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem
	I1227 20:28:54.277407  340625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem (1675 bytes)
	I1227 20:28:54.277482  340625 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem org=jenkins.newest-cni-307728 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-307728]
	I1227 20:28:54.307332  340625 provision.go:177] copyRemoteCerts
	I1227 20:28:54.307382  340625 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:28:54.307415  340625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:28:54.325258  340625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:28:54.419033  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:28:54.438154  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 20:28:54.455050  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 20:28:54.472709  340625 provision.go:87] duration metric: took 217.519219ms to configureAuth
	I1227 20:28:54.472736  340625 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:28:54.472956  340625 config.go:182] Loaded profile config "newest-cni-307728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:54.473073  340625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:28:54.492336  340625 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:54.492642  340625 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1227 20:28:54.492669  340625 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:28:54.753361  340625 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:28:54.753389  340625 machine.go:97] duration metric: took 958.279107ms to provisionDockerMachine
	I1227 20:28:54.753401  340625 client.go:176] duration metric: took 6.041292407s to LocalClient.Create
	I1227 20:28:54.753424  340625 start.go:167] duration metric: took 6.041353878s to libmachine.API.Create "newest-cni-307728"
	I1227 20:28:54.753439  340625 start.go:293] postStartSetup for "newest-cni-307728" (driver="docker")
	I1227 20:28:54.753451  340625 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:28:54.753523  340625 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:28:54.753568  340625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:28:54.772791  340625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:28:54.870458  340625 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:28:54.874573  340625 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:28:54.874605  340625 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:28:54.874618  340625 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/addons for local assets ...
	I1227 20:28:54.874671  340625 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/files for local assets ...
	I1227 20:28:54.874756  340625 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem -> 144272.pem in /etc/ssl/certs
	I1227 20:28:54.874874  340625 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:28:54.883036  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:28:54.906634  340625 start.go:296] duration metric: took 153.179795ms for postStartSetup
	I1227 20:28:54.907029  340625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-307728
	I1227 20:28:54.928933  340625 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/config.json ...
	I1227 20:28:54.929249  340625 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:28:54.929300  340625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:28:54.954691  340625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:28:55.044982  340625 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:28:55.049336  340625 start.go:128] duration metric: took 6.338989786s to createHost
	I1227 20:28:55.049357  340625 start.go:83] releasing machines lock for "newest-cni-307728", held for 6.339107658s
	I1227 20:28:55.049418  340625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-307728
	I1227 20:28:55.070462  340625 ssh_runner.go:195] Run: cat /version.json
	I1227 20:28:55.070526  340625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:28:55.070556  340625 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:28:55.070631  340625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:28:55.089304  340625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:28:55.090352  340625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:28:55.233933  340625 ssh_runner.go:195] Run: systemctl --version
	I1227 20:28:55.241758  340625 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:28:55.275894  340625 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:28:55.280648  340625 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:28:55.280715  340625 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:28:55.307733  340625 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
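The find/mv pair above sidelines any pre-existing bridge or podman CNI definitions so that only the CNI chosen later ("recommending kindnet" further down) configures pod networking; the files are renamed with a .mk_disabled suffix rather than deleted. Roughly the same step in Go (directory and name patterns as in the log):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIs renames bridge/podman CNI configs in dir by appending
// ".mk_disabled", skipping files that are already disabled.
func disableBridgeCNIs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return disabled, err
		}
		disabled = append(disabled, src)
	}
	return disabled, nil
}

func main() {
	files, err := disableBridgeCNIs("/etc/cni/net.d")
	fmt.Println(files, err)
}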
	I1227 20:28:55.307753  340625 start.go:496] detecting cgroup driver to use...
	I1227 20:28:55.307785  340625 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 20:28:55.307839  340625 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:28:55.323192  340625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:28:55.335205  340625 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:28:55.335265  340625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:28:55.351180  340625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:28:55.369175  340625 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:28:55.473778  340625 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:28:55.602420  340625 docker.go:234] disabling docker service ...
	I1227 20:28:55.602482  340625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:28:55.625550  340625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:28:55.643841  340625 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:28:55.802566  340625 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:28:55.918642  340625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:28:55.936192  340625 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:28:55.955225  340625 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:28:55.955288  340625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:55.966672  340625 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 20:28:55.966742  340625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:55.978502  340625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:55.989239  340625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:56.000177  340625 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:28:56.009564  340625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:56.022264  340625 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:56.037345  340625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:56.049451  340625 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:28:56.056993  340625 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:28:56.064796  340625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:28:56.162614  340625 ssh_runner.go:195] Run: sudo systemctl restart crio
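The sed calls above align CRI-O with the rest of the stack before the restart: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is forced to "systemd" to match the detected host cgroup driver, conmon is moved to the "pod" cgroup, and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. A rough Go equivalent of the first two edits on the drop-in file (path as in the log):

package main

import (
	"log"
	"os"
	"regexp"
)

// pinCrioConfig rewrites the pause_image and cgroup_manager keys in a
// CRI-O drop-in, mirroring the sed one-liners shown in the log.
func pinCrioConfig(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "`+cgroupManager+`"`))
	return os.WriteFile(path, data, 0o644)
}

func main() {
	if err := pinCrioConfig("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10.1", "systemd"); err != nil {
		log.Fatal(err)
	}
}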
	I1227 20:28:56.313255  340625 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:28:56.313324  340625 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:28:56.317889  340625 start.go:574] Will wait 60s for crictl version
	I1227 20:28:56.317981  340625 ssh_runner.go:195] Run: which crictl
	I1227 20:28:56.322051  340625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:28:56.349449  340625 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:28:56.349532  340625 ssh_runner.go:195] Run: crio --version
	I1227 20:28:56.382000  340625 ssh_runner.go:195] Run: crio --version
	I1227 20:28:56.413048  340625 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:28:56.414278  340625 cli_runner.go:164] Run: docker network inspect newest-cni-307728 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:28:56.433453  340625 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1227 20:28:56.437559  340625 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:28:56.449949  340625 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1227 20:28:55.724000  340025 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 20:28:55.724016  340025 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 20:28:55.724065  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:55.744982  340025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/default-k8s-diff-port-954154/id_rsa Username:docker}
	I1227 20:28:55.748159  340025 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:28:55.748180  340025 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:28:55.748239  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:55.754480  340025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/default-k8s-diff-port-954154/id_rsa Username:docker}
	I1227 20:28:55.778258  340025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/default-k8s-diff-port-954154/id_rsa Username:docker}
	I1227 20:28:55.867117  340025 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 20:28:55.867140  340025 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 20:28:55.872461  340025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:28:55.875196  340025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:28:55.883874  340025 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 20:28:55.883895  340025 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 20:28:55.887443  340025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:28:55.901123  340025 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 20:28:55.901148  340025 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 20:28:55.918460  340025 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 20:28:55.918485  340025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 20:28:55.937315  340025 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 20:28:55.937335  340025 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 20:28:55.952528  340025 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 20:28:55.952556  340025 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 20:28:55.967852  340025 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 20:28:55.967875  340025 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 20:28:55.984392  340025 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 20:28:55.984418  340025 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 20:28:56.000591  340025 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:28:56.000616  340025 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 20:28:56.014356  340025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:28:57.527657  340025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.655168953s)
	I1227 20:28:57.527711  340025 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.652481169s)
	I1227 20:28:57.527762  340025 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-954154" to be "Ready" ...
	I1227 20:28:57.527787  340025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.513395036s)
	I1227 20:28:57.527723  340025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.640259424s)
	I1227 20:28:57.529812  340025 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-954154 addons enable metrics-server
	
	I1227 20:28:57.536501  340025 node_ready.go:49] node "default-k8s-diff-port-954154" is "Ready"
	I1227 20:28:57.536525  340025 node_ready.go:38] duration metric: took 8.726968ms for node "default-k8s-diff-port-954154" to be "Ready" ...
	I1227 20:28:57.536540  340025 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:28:57.536581  340025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:28:57.541048  340025 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 20:28:57.542123  340025 addons.go:530] duration metric: took 1.862504727s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
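The "enable addons" step above amounts to copying each manifest under /etc/kubernetes/addons/ and applying it with the cluster's own kubectl binary against the in-VM kubeconfig, which is why the three apply commands overlap and the whole step completes in under two seconds. A minimal sketch of that apply invocation as it would run on the node, needing root there (paths as in the log):

package main

import (
	"log"
	"os"
	"os/exec"
)

// applyAddon runs the bundled kubectl against the in-cluster kubeconfig,
// like the `sudo KUBECONFIG=... kubectl apply -f ...` commands above.
func applyAddon(manifests ...string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.35.0/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := applyAddon(
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	); err != nil {
		log.Fatal(err)
	}
}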
	I1227 20:28:57.549333  340025 api_server.go:72] duration metric: took 1.870126325s to wait for apiserver process to appear ...
	I1227 20:28:57.549353  340025 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:28:57.549370  340025 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1227 20:28:57.553748  340025 api_server.go:325] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:28:57.553768  340025 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
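The 500 above is expected this early after a restart: the two failing poststarthooks (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) finish shortly after the apiserver starts serving, so the "waiting for apiserver healthz status" step simply re-polls /healthz until it returns 200. A bare-bones version of that kind of poll (endpoint as in the log; certificate verification is skipped here only to keep the sketch self-contained):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it answers 200
// or the deadline passes.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The real check trusts the cluster CA; skipping verification
			// here is purely for brevity.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.85.2:8444/healthz", time.Minute))
}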
	I1227 20:28:56.450940  340625 kubeadm.go:884] updating cluster {Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disa
bleMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:28:56.451057  340625 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:28:56.451105  340625 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:28:56.486578  340625 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:28:56.486604  340625 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:28:56.486659  340625 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:28:56.516779  340625 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:28:56.516806  340625 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:28:56.516814  340625 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0 crio true true} ...
	I1227 20:28:56.516942  340625 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-307728 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:28:56.517034  340625 ssh_runner.go:195] Run: crio config
	I1227 20:28:56.564462  340625 cni.go:84] Creating CNI manager for ""
	I1227 20:28:56.564481  340625 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:28:56.564497  340625 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1227 20:28:56.564520  340625 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-307728 NodeName:newest-cni-307728 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:28:56.564660  340625 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-307728"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:28:56.564717  340625 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:28:56.574206  340625 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:28:56.574276  340625 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:28:56.582079  340625 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1227 20:28:56.600287  340625 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:28:56.616380  340625 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1227 20:28:56.629039  340625 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:28:56.632734  340625 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:28:56.643610  340625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:28:56.731167  340625 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:28:56.767503  340625 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728 for IP: 192.168.103.2
	I1227 20:28:56.767525  340625 certs.go:195] generating shared ca certs ...
	I1227 20:28:56.767558  340625 certs.go:227] acquiring lock for ca certs: {Name:mkf159d13acb73e6d0ac046e4aaef1dce56a185d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:56.767733  340625 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key
	I1227 20:28:56.767803  340625 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key
	I1227 20:28:56.767817  340625 certs.go:257] generating profile certs ...
	I1227 20:28:56.767890  340625 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/client.key
	I1227 20:28:56.767942  340625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/client.crt with IP's: []
	I1227 20:28:56.794375  340625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/client.crt ...
	I1227 20:28:56.794408  340625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/client.crt: {Name:mkbe31918a2628f8309a18a3c482be7f59d5e510 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:56.794621  340625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/client.key ...
	I1227 20:28:56.794636  340625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/client.key: {Name:mkbc3d519f763199b338bf70577fc2817f7c4332 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:56.794741  340625 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.key.f45295df
	I1227 20:28:56.794772  340625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.crt.f45295df with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1227 20:28:56.879148  340625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.crt.f45295df ...
	I1227 20:28:56.879178  340625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.crt.f45295df: {Name:mk64269dd374c740149f7faf9e729189e8331f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:56.879382  340625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.key.f45295df ...
	I1227 20:28:56.879400  340625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.key.f45295df: {Name:mkc2c754a6d53e33d9862453e662ca2209e188d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:56.879503  340625 certs.go:382] copying /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.crt.f45295df -> /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.crt
	I1227 20:28:56.879600  340625 certs.go:386] copying /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.key.f45295df -> /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.key
	I1227 20:28:56.879659  340625 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.key
	I1227 20:28:56.879674  340625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.crt with IP's: []
	I1227 20:28:56.951167  340625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.crt ...
	I1227 20:28:56.951204  340625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.crt: {Name:mk61de4f8eabcfb14024a7f87b814c37a2ed9228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:56.951385  340625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.key ...
	I1227 20:28:56.951404  340625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.key: {Name:mk921c81a121096b317f7cf3e18e26665afa5455 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
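The apiserver certificate generated above has to cover every address clients may use to reach it: the in-cluster service IP 10.96.0.1, loopback, 10.0.0.1, and the node IP 192.168.103.2, typically alongside the usual control-plane DNS names. A condensed, self-contained sketch of issuing such a serving cert with crypto/x509 (the throwaway CA, DNS names and key size are illustrative; the 26280h lifetime matches CertExpiration in the config dump):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA so the sketch runs on its own.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert covering the SAN IPs listed in the log above.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "control-plane.minikube.internal"},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	fmt.Println(len(der), err)
}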
	I1227 20:28:56.951654  340625 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem (1338 bytes)
	W1227 20:28:56.951708  340625 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427_empty.pem, impossibly tiny 0 bytes
	I1227 20:28:56.951725  340625 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 20:28:56.951762  340625 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:28:56.951794  340625 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:28:56.951828  340625 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem (1675 bytes)
	I1227 20:28:56.951885  340625 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:28:56.952685  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:28:56.989260  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:28:57.016199  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:28:57.038448  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 20:28:57.063231  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 20:28:57.083056  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:28:57.103361  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:28:57.124895  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:28:57.146997  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:28:57.168985  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem --> /usr/share/ca-certificates/14427.pem (1338 bytes)
	I1227 20:28:57.192337  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /usr/share/ca-certificates/144272.pem (1708 bytes)
	I1227 20:28:57.212648  340625 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:28:57.226053  340625 ssh_runner.go:195] Run: openssl version
	I1227 20:28:57.232690  340625 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:57.240634  340625 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:28:57.248278  340625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:57.253026  340625 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:57.253083  340625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:57.293170  340625 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:28:57.302311  340625 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 20:28:57.310221  340625 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14427.pem
	I1227 20:28:57.319835  340625 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14427.pem /etc/ssl/certs/14427.pem
	I1227 20:28:57.328517  340625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14427.pem
	I1227 20:28:57.333508  340625 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 19:58 /usr/share/ca-certificates/14427.pem
	I1227 20:28:57.333570  340625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14427.pem
	I1227 20:28:57.385727  340625 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:28:57.395544  340625 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14427.pem /etc/ssl/certs/51391683.0
	I1227 20:28:57.405387  340625 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/144272.pem
	I1227 20:28:57.414374  340625 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/144272.pem /etc/ssl/certs/144272.pem
	I1227 20:28:57.422682  340625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144272.pem
	I1227 20:28:57.426727  340625 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 19:58 /usr/share/ca-certificates/144272.pem
	I1227 20:28:57.426781  340625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144272.pem
	I1227 20:28:57.468027  340625 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:28:57.475579  340625 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/144272.pem /etc/ssl/certs/3ec20f2e.0
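The run above installs each CA into the node's system trust store the conventional way: hash the certificate subject with openssl, then symlink the PEM as <hash>.0 under /etc/ssl/certs (b5213941.0, 51391683.0 and 3ec20f2e.0 in this log). A minimal Go sketch of that step, shelling out to openssl just as the logged commands do; the paths are taken from this log and installCA is a made-up helper name, not minikube's actual code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA mirrors the logged sequence: "openssl x509 -hash -noout -in <pem>"
// to get the subject hash, then "ln -fs <pem> /etc/ssl/certs/<hash>.0".
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem in this run
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // -f semantics: replace a stale link if present
	return os.Symlink(pemPath, link)
}

func main() {
	for _, pem := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/14427.pem",
		"/usr/share/ca-certificates/144272.pem",
	} {
		if err := installCA(pem); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}

The <hash>.0 symlink is the long-standing OpenSSL c_rehash convention, which is why TLS clients on the node pick the certificates up without any further configuration.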
	I1227 20:28:57.482669  340625 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:28:57.486989  340625 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 20:28:57.487049  340625 kubeadm.go:401] StartCluster: {Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:28:57.487127  340625 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:28:57.487176  340625 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:28:57.526113  340625 cri.go:96] found id: ""
	I1227 20:28:57.526185  340625 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:28:57.535676  340625 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 20:28:57.544346  340625 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 20:28:57.544400  340625 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 20:28:57.552362  340625 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 20:28:57.552381  340625 kubeadm.go:158] found existing configuration files:
	
	I1227 20:28:57.552419  340625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 20:28:57.559859  340625 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 20:28:57.559901  340625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 20:28:57.566894  340625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 20:28:57.574224  340625 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 20:28:57.574271  340625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 20:28:57.581383  340625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 20:28:57.588654  340625 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 20:28:57.588689  340625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 20:28:57.595675  340625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 20:28:57.603162  340625 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 20:28:57.603207  340625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 20:28:57.610120  340625 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 20:28:57.651578  340625 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 20:28:57.651650  340625 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 20:28:57.717226  340625 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 20:28:57.717315  340625 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1227 20:28:57.717358  340625 kubeadm.go:319] OS: Linux
	I1227 20:28:57.717448  340625 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 20:28:57.717519  340625 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 20:28:57.717567  340625 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 20:28:57.717647  340625 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 20:28:57.717733  340625 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 20:28:57.717812  340625 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 20:28:57.717923  340625 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 20:28:57.717998  340625 kubeadm.go:319] CGROUPS_IO: enabled
	I1227 20:28:57.774331  340625 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 20:28:57.774452  340625 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 20:28:57.774590  340625 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 20:28:57.781865  340625 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1227 20:28:53.641780  334810 pod_ready.go:104] pod "coredns-7d764666f9-nvnjg" is not "Ready", error: <nil>
	W1227 20:28:56.141575  334810 pod_ready.go:104] pod "coredns-7d764666f9-nvnjg" is not "Ready", error: <nil>
	I1227 20:28:57.784248  340625 out.go:252]   - Generating certificates and keys ...
	I1227 20:28:57.784354  340625 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 20:28:57.784471  340625 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 20:28:57.800338  340625 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 20:28:57.829651  340625 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 20:28:57.870093  340625 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 20:28:58.023851  340625 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 20:28:58.175326  340625 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 20:28:58.175458  340625 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-307728] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1227 20:28:58.227767  340625 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 20:28:58.227948  340625 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-307728] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1227 20:28:58.327146  340625 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 20:28:58.413976  340625 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 20:28:58.519514  340625 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 20:28:58.519622  340625 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 20:28:58.602374  340625 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 20:28:58.658792  340625 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 20:28:58.828754  340625 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 20:28:58.899131  340625 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 20:28:58.981756  340625 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 20:28:58.982297  340625 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 20:28:58.986398  340625 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 20:28:58.050409  340025 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1227 20:28:58.055041  340025 api_server.go:325] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:28:58.055071  340025 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:28:58.549732  340025 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1227 20:28:58.554808  340025 api_server.go:325] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1227 20:28:58.555845  340025 api_server.go:141] control plane version: v1.35.0
	I1227 20:28:58.555884  340025 api_server.go:131] duration metric: took 1.006522468s to wait for apiserver health ...
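The 500 followed by a 200 above is the usual pattern while the apiserver finishes its rbac/bootstrap-roles post-start hook; minikube simply keeps polling /healthz until it answers ok. A minimal sketch of such a poll loop, assuming the endpoint from this log and skipping TLS verification only because the probe runs before the host trusts the cluster CA; waitForHealthz is a hypothetical helper, not minikube's api_server.go:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver health endpoint until it returns 200,
// tolerating transient 500s such as "[-]poststarthook/rbac/bootstrap-roles failed".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serving cert is not in the probe's trust store, so
		// verification is skipped for this health check only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // corresponds to "healthz returned 200: ok" in the log
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}

In this run the loop would exit after roughly one second, matching the "took 1.006522468s to wait for apiserver health" metric above.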
	I1227 20:28:58.555894  340025 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:28:58.607197  340025 system_pods.go:59] 8 kube-system pods found
	I1227 20:28:58.607235  340025 system_pods.go:61] "coredns-7d764666f9-gtzdb" [94553f69-88cf-4e2c-94e4-99d2034bcc9a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:28:58.607245  340025 system_pods.go:61] "etcd-default-k8s-diff-port-954154" [4eafa22e-2c0b-4d78-90f2-2becf0d0e321] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:28:58.607258  340025 system_pods.go:61] "kindnet-c9zm9" [3b0b3ae7-d30d-4a2d-bbff-21dba59ffb5a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 20:28:58.607263  340025 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-954154" [f52b07b4-22fb-4e93-bfa4-f86ed39beed6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:28:58.607273  340025 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-954154" [b43b5648-7206-444d-8fdd-7504b26bf16c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:28:58.607281  340025 system_pods.go:61] "kube-proxy-m5zcc" [2ca10db8-75c1-459b-b3ec-bdb128f9d72a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 20:28:58.607286  340025 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-954154" [b39ea939-f629-4d8d-9874-3234839d3abd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:28:58.607292  340025 system_pods.go:61] "storage-provisioner" [e47d55de-82b6-47f6-b639-1c28182777af] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:28:58.607299  340025 system_pods.go:74] duration metric: took 51.39957ms to wait for pod list to return data ...
	I1227 20:28:58.607309  340025 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:28:58.609698  340025 default_sa.go:45] found service account: "default"
	I1227 20:28:58.609718  340025 default_sa.go:55] duration metric: took 2.396384ms for default service account to be created ...
	I1227 20:28:58.609726  340025 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:28:58.612207  340025 system_pods.go:86] 8 kube-system pods found
	I1227 20:28:58.612229  340025 system_pods.go:89] "coredns-7d764666f9-gtzdb" [94553f69-88cf-4e2c-94e4-99d2034bcc9a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:28:58.612237  340025 system_pods.go:89] "etcd-default-k8s-diff-port-954154" [4eafa22e-2c0b-4d78-90f2-2becf0d0e321] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:28:58.612250  340025 system_pods.go:89] "kindnet-c9zm9" [3b0b3ae7-d30d-4a2d-bbff-21dba59ffb5a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 20:28:58.612256  340025 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-954154" [f52b07b4-22fb-4e93-bfa4-f86ed39beed6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:28:58.612266  340025 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-954154" [b43b5648-7206-444d-8fdd-7504b26bf16c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:28:58.612271  340025 system_pods.go:89] "kube-proxy-m5zcc" [2ca10db8-75c1-459b-b3ec-bdb128f9d72a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 20:28:58.612282  340025 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-954154" [b39ea939-f629-4d8d-9874-3234839d3abd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:28:58.612294  340025 system_pods.go:89] "storage-provisioner" [e47d55de-82b6-47f6-b639-1c28182777af] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:28:58.612305  340025 system_pods.go:126] duration metric: took 2.569534ms to wait for k8s-apps to be running ...
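The "waiting for k8s-apps to be running" step above lists kube-system pods and inspects their readiness conditions; pods restarted during this test still report ContainersNotReady. A rough client-go sketch of that kind of check, assuming a reachable kubeconfig (the node-side path from this log is used purely as a placeholder) and using one of the component labels minikube waits on; this is an illustration, not minikube's system_pods.go:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod carries a PodReady condition set to True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"}) // e.g. the coredns pods above
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s ready=%v\n", p.Name, podReady(&p))
	}
}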
	I1227 20:28:58.612315  340025 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:28:58.612351  340025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:28:58.624877  340025 system_svc.go:56] duration metric: took 12.557367ms WaitForService to wait for kubelet
	I1227 20:28:58.624898  340025 kubeadm.go:587] duration metric: took 2.945693199s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:28:58.624959  340025 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:28:58.627235  340025 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 20:28:58.627258  340025 node_conditions.go:123] node cpu capacity is 8
	I1227 20:28:58.627273  340025 node_conditions.go:105] duration metric: took 2.308686ms to run NodePressure ...
	I1227 20:28:58.627296  340025 start.go:242] waiting for startup goroutines ...
	I1227 20:28:58.627310  340025 start.go:247] waiting for cluster config update ...
	I1227 20:28:58.627328  340025 start.go:256] writing updated cluster config ...
	I1227 20:28:58.627581  340025 ssh_runner.go:195] Run: rm -f paused
	I1227 20:28:58.631443  340025 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:28:58.634602  340025 pod_ready.go:83] waiting for pod "coredns-7d764666f9-gtzdb" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 20:29:00.640993  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	W1227 20:28:58.641325  334810 pod_ready.go:104] pod "coredns-7d764666f9-nvnjg" is not "Ready", error: <nil>
	W1227 20:29:00.641748  334810 pod_ready.go:104] pod "coredns-7d764666f9-nvnjg" is not "Ready", error: <nil>
	W1227 20:29:02.647042  334810 pod_ready.go:104] pod "coredns-7d764666f9-nvnjg" is not "Ready", error: <nil>
	I1227 20:28:58.987788  340625 out.go:252]   - Booting up control plane ...
	I1227 20:28:58.987909  340625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 20:28:58.988350  340625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 20:28:58.991232  340625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 20:28:59.009829  340625 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 20:28:59.010080  340625 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 20:28:59.018540  340625 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 20:28:59.018939  340625 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 20:28:59.019013  340625 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 20:28:59.122102  340625 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 20:28:59.122243  340625 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 20:28:59.623869  340625 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.822281ms
	I1227 20:28:59.626698  340625 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 20:28:59.626835  340625 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1227 20:28:59.626991  340625 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 20:28:59.627081  340625 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 20:29:00.132655  340625 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 505.729406ms
	I1227 20:29:01.422270  340625 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.79552906s
	I1227 20:29:03.128834  340625 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502046336s
	I1227 20:29:03.152511  340625 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 20:29:03.162776  340625 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 20:29:03.172127  340625 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 20:29:03.172413  340625 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-307728 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 20:29:03.183365  340625 kubeadm.go:319] [bootstrap-token] Using token: m3fv2a.3hy2dotriyukxsjh
	I1227 20:29:03.184664  340625 out.go:252]   - Configuring RBAC rules ...
	I1227 20:29:03.184815  340625 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 20:29:03.188315  340625 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 20:29:03.194363  340625 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 20:29:03.196969  340625 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 20:29:03.199431  340625 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 20:29:03.201765  340625 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 20:29:03.538642  340625 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 20:29:03.963285  340625 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 20:29:04.536423  340625 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 20:29:04.536448  340625 kubeadm.go:319] 
	I1227 20:29:04.536526  340625 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 20:29:04.536532  340625 kubeadm.go:319] 
	I1227 20:29:04.536632  340625 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 20:29:04.536638  340625 kubeadm.go:319] 
	I1227 20:29:04.536668  340625 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 20:29:04.536741  340625 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 20:29:04.536814  340625 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 20:29:04.536821  340625 kubeadm.go:319] 
	I1227 20:29:04.536887  340625 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 20:29:04.536933  340625 kubeadm.go:319] 
	I1227 20:29:04.537020  340625 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 20:29:04.537030  340625 kubeadm.go:319] 
	I1227 20:29:04.537108  340625 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 20:29:04.537214  340625 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 20:29:04.537310  340625 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 20:29:04.537320  340625 kubeadm.go:319] 
	I1227 20:29:04.537447  340625 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 20:29:04.537551  340625 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 20:29:04.537565  340625 kubeadm.go:319] 
	I1227 20:29:04.537685  340625 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token m3fv2a.3hy2dotriyukxsjh \
	I1227 20:29:04.537816  340625 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da6685172cfe638aa995cfd5ba180acce81fcf1770a93ae27b4f215e6d45ef35 \
	I1227 20:29:04.537849  340625 kubeadm.go:319] 	--control-plane 
	I1227 20:29:04.537858  340625 kubeadm.go:319] 
	I1227 20:29:04.537990  340625 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 20:29:04.538002  340625 kubeadm.go:319] 
	I1227 20:29:04.538113  340625 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token m3fv2a.3hy2dotriyukxsjh \
	I1227 20:29:04.538240  340625 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da6685172cfe638aa995cfd5ba180acce81fcf1770a93ae27b4f215e6d45ef35 
	I1227 20:29:04.541383  340625 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1227 20:29:04.541573  340625 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 20:29:04.541611  340625 cni.go:84] Creating CNI manager for ""
	I1227 20:29:04.541626  340625 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:29:04.544110  340625 out.go:179] * Configuring CNI (Container Networking Interface) ...
	
	
	==> CRI-O <==
	Dec 27 20:28:25 no-preload-014435 crio[566]: time="2025-12-27T20:28:25.27837067Z" level=info msg="Started container" PID=1742 containerID=9b37c4ffb76d82b01711eaee7de75c77f95dfb0a270e8c04874ff3747d88ea65 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk/dashboard-metrics-scraper id=68599563-4452-42f3-b7d8-d4829f7f5bdd name=/runtime.v1.RuntimeService/StartContainer sandboxID=f578af9f5f9f446bcedf8fa5800ed8103b745a26295e3dc9548bbb50c7f6fdea
	Dec 27 20:28:25 no-preload-014435 crio[566]: time="2025-12-27T20:28:25.335283029Z" level=info msg="Removing container: c06e8f182bdafe6d8a9f38a86e7345156bb2efd9ed93ead9991c0833628623c8" id=3045db0d-d603-4e9d-bd3d-8d585e1caeb2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:28:25 no-preload-014435 crio[566]: time="2025-12-27T20:28:25.345049792Z" level=info msg="Removed container c06e8f182bdafe6d8a9f38a86e7345156bb2efd9ed93ead9991c0833628623c8: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk/dashboard-metrics-scraper" id=3045db0d-d603-4e9d-bd3d-8d585e1caeb2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:28:39 no-preload-014435 crio[566]: time="2025-12-27T20:28:39.371022621Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5160c85f-dba0-42b3-9466-3d1c44075904 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:28:39 no-preload-014435 crio[566]: time="2025-12-27T20:28:39.371905487Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ed99fd5e-8d38-463e-a051-c77311a00a05 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:28:39 no-preload-014435 crio[566]: time="2025-12-27T20:28:39.373100296Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b79d63eb-ae4c-4b5d-aca5-ac82dd01195d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:28:39 no-preload-014435 crio[566]: time="2025-12-27T20:28:39.373228533Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:39 no-preload-014435 crio[566]: time="2025-12-27T20:28:39.378326615Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:39 no-preload-014435 crio[566]: time="2025-12-27T20:28:39.378463493Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1e57e90e881c7ad626db887738d785ea0d0edb965443dca370b6c2fe47a990b8/merged/etc/passwd: no such file or directory"
	Dec 27 20:28:39 no-preload-014435 crio[566]: time="2025-12-27T20:28:39.37848681Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1e57e90e881c7ad626db887738d785ea0d0edb965443dca370b6c2fe47a990b8/merged/etc/group: no such file or directory"
	Dec 27 20:28:39 no-preload-014435 crio[566]: time="2025-12-27T20:28:39.378686234Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:39 no-preload-014435 crio[566]: time="2025-12-27T20:28:39.406864337Z" level=info msg="Created container c901e1afa45cf91c5793bd5e02e20608288cabb56b1e113347e01c6d78bf95e4: kube-system/storage-provisioner/storage-provisioner" id=b79d63eb-ae4c-4b5d-aca5-ac82dd01195d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:28:39 no-preload-014435 crio[566]: time="2025-12-27T20:28:39.407386694Z" level=info msg="Starting container: c901e1afa45cf91c5793bd5e02e20608288cabb56b1e113347e01c6d78bf95e4" id=0f4888f3-4617-4411-9952-6b74b05e2565 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:28:39 no-preload-014435 crio[566]: time="2025-12-27T20:28:39.40901011Z" level=info msg="Started container" PID=1758 containerID=c901e1afa45cf91c5793bd5e02e20608288cabb56b1e113347e01c6d78bf95e4 description=kube-system/storage-provisioner/storage-provisioner id=0f4888f3-4617-4411-9952-6b74b05e2565 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c444ff1474e83735b4552bc2bebd9519a65a719c8e71305db4ae2b6e4c9b2502
	Dec 27 20:28:52 no-preload-014435 crio[566]: time="2025-12-27T20:28:52.237633314Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=375458a5-7129-4322-af7d-8e4f62071159 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:28:52 no-preload-014435 crio[566]: time="2025-12-27T20:28:52.35679411Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0a831b7a-b9e2-4cc3-8ac3-a39b1d3cbcb5 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:28:52 no-preload-014435 crio[566]: time="2025-12-27T20:28:52.372984756Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk/dashboard-metrics-scraper" id=f3bf7521-bffe-4804-965c-63d66e069a7f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:28:52 no-preload-014435 crio[566]: time="2025-12-27T20:28:52.373109021Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:52 no-preload-014435 crio[566]: time="2025-12-27T20:28:52.382184183Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:52 no-preload-014435 crio[566]: time="2025-12-27T20:28:52.382806898Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:28:52 no-preload-014435 crio[566]: time="2025-12-27T20:28:52.525339633Z" level=info msg="Created container a67c4e635ce57fb32f62f332eda073862febf665cb42072d6e2b8208a2cf15b1: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk/dashboard-metrics-scraper" id=f3bf7521-bffe-4804-965c-63d66e069a7f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:28:52 no-preload-014435 crio[566]: time="2025-12-27T20:28:52.526086801Z" level=info msg="Starting container: a67c4e635ce57fb32f62f332eda073862febf665cb42072d6e2b8208a2cf15b1" id=0062cd81-7534-428c-8bee-4eae2d49c9a6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:28:52 no-preload-014435 crio[566]: time="2025-12-27T20:28:52.528403789Z" level=info msg="Started container" PID=1797 containerID=a67c4e635ce57fb32f62f332eda073862febf665cb42072d6e2b8208a2cf15b1 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk/dashboard-metrics-scraper id=0062cd81-7534-428c-8bee-4eae2d49c9a6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f578af9f5f9f446bcedf8fa5800ed8103b745a26295e3dc9548bbb50c7f6fdea
	Dec 27 20:28:53 no-preload-014435 crio[566]: time="2025-12-27T20:28:53.414807865Z" level=info msg="Removing container: 9b37c4ffb76d82b01711eaee7de75c77f95dfb0a270e8c04874ff3747d88ea65" id=2d891500-2e8c-4a54-a84c-59d811116b1b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:28:53 no-preload-014435 crio[566]: time="2025-12-27T20:28:53.429322429Z" level=info msg="Removed container 9b37c4ffb76d82b01711eaee7de75c77f95dfb0a270e8c04874ff3747d88ea65: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk/dashboard-metrics-scraper" id=2d891500-2e8c-4a54-a84c-59d811116b1b name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a67c4e635ce57       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago       Exited              dashboard-metrics-scraper   3                   f578af9f5f9f4       dashboard-metrics-scraper-867fb5f87b-zw6bk   kubernetes-dashboard
	c901e1afa45cf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           28 seconds ago       Running             storage-provisioner         1                   c444ff1474e83       storage-provisioner                          kube-system
	0ca092b3ce3ee       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   49 seconds ago       Running             kubernetes-dashboard        0                   e703844f36755       kubernetes-dashboard-b84665fb8-v6b7x         kubernetes-dashboard
	b9b509ee6a53f       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           58 seconds ago       Running             coredns                     0                   6da3e0d9e4163       coredns-7d764666f9-nvrq6                     kube-system
	0a5cfa6b4d2e8       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           58 seconds ago       Running             busybox                     1                   a077a2bcd6334       busybox                                      default
	e95e282035952       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           58 seconds ago       Running             kube-proxy                  0                   0c801e912bf19       kube-proxy-ctvzq                             kube-system
	5e16382753ebc       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           58 seconds ago       Running             kindnet-cni                 0                   dea99ef478cce       kindnet-7pgwz                                kube-system
	b9a65772953dd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           58 seconds ago       Exited              storage-provisioner         0                   c444ff1474e83       storage-provisioner                          kube-system
	ccbaca5423134       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           About a minute ago   Running             kube-controller-manager     0                   56eafd8bcdf37       kube-controller-manager-no-preload-014435    kube-system
	7e08e2ae41d9f       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           About a minute ago   Running             kube-scheduler              0                   127f0a21f7bd2       kube-scheduler-no-preload-014435             kube-system
	a18c71b2140b3       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           About a minute ago   Running             kube-apiserver              0                   f8b580cdf6707       kube-apiserver-no-preload-014435             kube-system
	455bcec8a175a       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           About a minute ago   Running             etcd                        0                   5fa2125c56149       etcd-no-preload-014435                       kube-system
	
	
	==> coredns [b9b509ee6a53f3f461dc21f5d20cb2ed21b39cc41369daf17da2bf1e93644530] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:47735 - 36198 "HINFO IN 8077970765044174866.8899174393879063754. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.069497073s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-014435
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-014435
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=no-preload-014435
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_27_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:27:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-014435
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:28:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:28:38 +0000   Sat, 27 Dec 2025 20:27:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:28:38 +0000   Sat, 27 Dec 2025 20:27:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:28:38 +0000   Sat, 27 Dec 2025 20:27:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:28:38 +0000   Sat, 27 Dec 2025 20:27:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-014435
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                16adf691-8e3a-4b05-b69e-6cb195641c2f
	  Boot ID:                    e862e3ac-f8e7-431b-a536-9252fc29cb3f
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-7d764666f9-nvrq6                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     114s
	  kube-system                 etcd-no-preload-014435                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         119s
	  kube-system                 kindnet-7pgwz                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-no-preload-014435              250m (3%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-no-preload-014435     200m (2%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-ctvzq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-no-preload-014435              100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-zw6bk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-v6b7x          0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  115s  node-controller  Node no-preload-014435 event: Registered Node no-preload-014435 in Controller
	  Normal  RegisteredNode  57s   node-controller  Node no-preload-014435 event: Registered Node no-preload-014435 in Controller
	
	
	==> dmesg <==
	[  +0.091803] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026212] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.286875] kauditd_printk_skb: 47 callbacks suppressed
	[Dec27 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e cb b0 88 25 d7 08 06
	[ +12.641300] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 46 2d 1c 8d 7d 08 06
	[  +0.000396] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e cb b0 88 25 d7 08 06
	[Dec27 20:26] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 11 8b 52 63 6c 08 06
	[ +22.750477] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 b6 be 1c 32 17 08 06
	[  +0.000388] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 11 8b 52 63 6c 08 06
	[ +11.650371] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae cd 49 2a 1d c0 08 06
	[Dec27 20:27] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 53 ca 84 73 c0 08 06
	[  +0.000337] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 74 f5 52 32 9a 08 06
	[ +17.275927] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 80 6a 51 25 19 08 06
	[  +0.000341] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ae cd 49 2a 1d c0 08 06
	
	
	==> etcd [455bcec8a175a162715bf84ffdfb0f031b741ff15312c1ed41956c3dafb97b6e] <==
	{"level":"info","ts":"2025-12-27T20:28:05.876453Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-27T20:28:05.876652Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T20:28:06.758777Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T20:28:06.758840Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T20:28:06.758972Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-27T20:28:06.759005Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:28:06.759025Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T20:28:06.759523Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-27T20:28:06.759562Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:28:06.759582Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T20:28:06.759592Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-27T20:28:06.760334Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:no-preload-014435 ClientURLs:[https://192.168.94.2:2379]}","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:28:06.760368Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:28:06.760427Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:28:06.760541Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:28:06.760565Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T20:28:06.761883Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:28:06.761871Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:28:06.765608Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-12-27T20:28:06.766523Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:28:52.383601Z","caller":"traceutil/trace.go:172","msg":"trace[512481360] linearizableReadLoop","detail":"{readStateIndex:681; appliedIndex:681; }","duration":"139.931957ms","start":"2025-12-27T20:28:52.243640Z","end":"2025-12-27T20:28:52.383572Z","steps":["trace[512481360] 'read index received'  (duration: 139.92184ms)","trace[512481360] 'applied index is now lower than readState.Index'  (duration: 8.63µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-27T20:28:52.383738Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"140.044729ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-27T20:28:52.383812Z","caller":"traceutil/trace.go:172","msg":"trace[2107378286] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:643; }","duration":"140.168248ms","start":"2025-12-27T20:28:52.243636Z","end":"2025-12-27T20:28:52.383804Z","steps":["trace[2107378286] 'agreement among raft nodes before linearized reading'  (duration: 140.017301ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:28:52.383808Z","caller":"traceutil/trace.go:172","msg":"trace[1165899591] transaction","detail":"{read_only:false; response_revision:644; number_of_response:1; }","duration":"142.595378ms","start":"2025-12-27T20:28:52.241198Z","end":"2025-12-27T20:28:52.383793Z","steps":["trace[1165899591] 'process raft request'  (duration: 142.409226ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:28:52.658487Z","caller":"traceutil/trace.go:172","msg":"trace[1031382787] transaction","detail":"{read_only:false; response_revision:645; number_of_response:1; }","duration":"128.351024ms","start":"2025-12-27T20:28:52.530120Z","end":"2025-12-27T20:28:52.658471Z","steps":["trace[1031382787] 'process raft request'  (duration: 128.253485ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:29:07 up  1:11,  0 user,  load average: 3.62, 3.24, 2.29
	Linux no-preload-014435 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5e16382753ebc8b7372b1654da2876be48c0fdc56c35110cbf3c811d7a0f6ed0] <==
	I1227 20:28:08.859485       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:28:08.859850       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1227 20:28:08.860118       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:28:08.860176       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:28:08.860222       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:28:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:28:09.155755       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:28:09.255477       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:28:09.255690       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:28:09.255946       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 20:28:09.656007       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:28:09.656040       1 metrics.go:72] Registering metrics
	I1227 20:28:09.656129       1 controller.go:711] "Syncing nftables rules"
	I1227 20:28:19.155291       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1227 20:28:19.155346       1 main.go:301] handling current node
	I1227 20:28:29.155818       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1227 20:28:29.155854       1 main.go:301] handling current node
	I1227 20:28:39.155775       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1227 20:28:39.155813       1 main.go:301] handling current node
	I1227 20:28:49.155162       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1227 20:28:49.155204       1 main.go:301] handling current node
	I1227 20:28:59.156061       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1227 20:28:59.156111       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a18c71b2140b36f980a1c87567071c8701b3e5f8aa4ab2f6fb52beb4656e9bd2] <==
	I1227 20:28:07.834174       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 20:28:07.834274       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 20:28:07.834480       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:07.834523       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 20:28:07.834533       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E1227 20:28:07.839106       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 20:28:07.839704       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1227 20:28:07.839801       1 aggregator.go:187] initial CRD sync complete...
	I1227 20:28:07.839881       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 20:28:07.839906       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 20:28:07.839994       1 cache.go:39] Caches are synced for autoregister controller
	I1227 20:28:07.839980       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 20:28:07.843790       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 20:28:07.859936       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:28:08.202618       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 20:28:08.258252       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 20:28:08.280202       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:28:08.287727       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:28:08.298665       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:28:08.359900       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.50.78"}
	I1227 20:28:08.372155       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.89.181"}
	I1227 20:28:08.737069       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:28:11.371878       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:28:11.471744       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:28:11.621367       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [ccbaca54231342c2e05e7f6f39601d2b1cde71a9e688363aa99cd9f16b7b740b] <==
	I1227 20:28:10.978642       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.978686       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.978714       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.978755       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.978788       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.978818       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.977998       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.979001       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.977878       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:28:10.977992       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.979604       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.979081       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.979094       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.979062       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.979104       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.979112       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.985564       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:28:10.979122       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.979129       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.979073       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:10.992951       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:11.078494       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:11.078521       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:28:11.078528       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:28:11.086516       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [e95e2820359521d157e6543a261cc1ecc9b5fcfaec66bf820cd24c038ec2d52f] <==
	I1227 20:28:08.680604       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:28:08.745496       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:28:08.845649       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:08.845691       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1227 20:28:08.845780       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:28:08.871818       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:28:08.871876       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:28:08.878621       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:28:08.879130       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:28:08.879163       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:28:08.881396       1 config.go:200] "Starting service config controller"
	I1227 20:28:08.881417       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:28:08.881445       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:28:08.881451       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:28:08.881519       1 config.go:309] "Starting node config controller"
	I1227 20:28:08.881525       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:28:08.881534       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:28:08.881837       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:28:08.881850       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:28:08.981907       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 20:28:08.982002       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:28:08.982459       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7e08e2ae41d9f45611fbcd90fd0c42176961f4b3b41557748ee9cae592b1a0c6] <==
	I1227 20:28:06.131767       1 serving.go:386] Generated self-signed cert in-memory
	W1227 20:28:07.772433       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 20:28:07.772470       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 20:28:07.772482       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 20:28:07.772496       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 20:28:07.814936       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 20:28:07.814969       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:28:07.818476       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 20:28:07.818903       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 20:28:07.818981       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:28:07.819035       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 20:28:07.919309       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:28:25 no-preload-014435 kubelet[709]: E1227 20:28:25.237004     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk" containerName="dashboard-metrics-scraper"
	Dec 27 20:28:25 no-preload-014435 kubelet[709]: I1227 20:28:25.237035     709 scope.go:122] "RemoveContainer" containerID="c06e8f182bdafe6d8a9f38a86e7345156bb2efd9ed93ead9991c0833628623c8"
	Dec 27 20:28:25 no-preload-014435 kubelet[709]: I1227 20:28:25.333995     709 scope.go:122] "RemoveContainer" containerID="c06e8f182bdafe6d8a9f38a86e7345156bb2efd9ed93ead9991c0833628623c8"
	Dec 27 20:28:25 no-preload-014435 kubelet[709]: E1227 20:28:25.334203     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk" containerName="dashboard-metrics-scraper"
	Dec 27 20:28:25 no-preload-014435 kubelet[709]: I1227 20:28:25.334236     709 scope.go:122] "RemoveContainer" containerID="9b37c4ffb76d82b01711eaee7de75c77f95dfb0a270e8c04874ff3747d88ea65"
	Dec 27 20:28:25 no-preload-014435 kubelet[709]: E1227 20:28:25.334424     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zw6bk_kubernetes-dashboard(4a798b25-d765-486c-826b-777556a58641)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk" podUID="4a798b25-d765-486c-826b-777556a58641"
	Dec 27 20:28:31 no-preload-014435 kubelet[709]: E1227 20:28:31.261217     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk" containerName="dashboard-metrics-scraper"
	Dec 27 20:28:31 no-preload-014435 kubelet[709]: I1227 20:28:31.261250     709 scope.go:122] "RemoveContainer" containerID="9b37c4ffb76d82b01711eaee7de75c77f95dfb0a270e8c04874ff3747d88ea65"
	Dec 27 20:28:31 no-preload-014435 kubelet[709]: E1227 20:28:31.261398     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zw6bk_kubernetes-dashboard(4a798b25-d765-486c-826b-777556a58641)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk" podUID="4a798b25-d765-486c-826b-777556a58641"
	Dec 27 20:28:39 no-preload-014435 kubelet[709]: I1227 20:28:39.370550     709 scope.go:122] "RemoveContainer" containerID="b9a65772953dd8fa2257dfb6c558509023e860f7da1e48e5f6bfae91bae57d63"
	Dec 27 20:28:47 no-preload-014435 kubelet[709]: E1227 20:28:47.879865     709 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-nvrq6" containerName="coredns"
	Dec 27 20:28:52 no-preload-014435 kubelet[709]: E1227 20:28:52.237041     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk" containerName="dashboard-metrics-scraper"
	Dec 27 20:28:52 no-preload-014435 kubelet[709]: I1227 20:28:52.237081     709 scope.go:122] "RemoveContainer" containerID="9b37c4ffb76d82b01711eaee7de75c77f95dfb0a270e8c04874ff3747d88ea65"
	Dec 27 20:28:53 no-preload-014435 kubelet[709]: I1227 20:28:53.413239     709 scope.go:122] "RemoveContainer" containerID="9b37c4ffb76d82b01711eaee7de75c77f95dfb0a270e8c04874ff3747d88ea65"
	Dec 27 20:28:53 no-preload-014435 kubelet[709]: E1227 20:28:53.413491     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk" containerName="dashboard-metrics-scraper"
	Dec 27 20:28:53 no-preload-014435 kubelet[709]: I1227 20:28:53.413522     709 scope.go:122] "RemoveContainer" containerID="a67c4e635ce57fb32f62f332eda073862febf665cb42072d6e2b8208a2cf15b1"
	Dec 27 20:28:53 no-preload-014435 kubelet[709]: E1227 20:28:53.413714     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zw6bk_kubernetes-dashboard(4a798b25-d765-486c-826b-777556a58641)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk" podUID="4a798b25-d765-486c-826b-777556a58641"
	Dec 27 20:29:01 no-preload-014435 kubelet[709]: E1227 20:29:01.261009     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk" containerName="dashboard-metrics-scraper"
	Dec 27 20:29:01 no-preload-014435 kubelet[709]: I1227 20:29:01.261061     709 scope.go:122] "RemoveContainer" containerID="a67c4e635ce57fb32f62f332eda073862febf665cb42072d6e2b8208a2cf15b1"
	Dec 27 20:29:01 no-preload-014435 kubelet[709]: E1227 20:29:01.261266     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zw6bk_kubernetes-dashboard(4a798b25-d765-486c-826b-777556a58641)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zw6bk" podUID="4a798b25-d765-486c-826b-777556a58641"
	Dec 27 20:29:01 no-preload-014435 kubelet[709]: I1227 20:29:01.700083     709 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 27 20:29:01 no-preload-014435 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 20:29:01 no-preload-014435 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 20:29:01 no-preload-014435 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:29:01 no-preload-014435 systemd[1]: kubelet.service: Consumed 1.789s CPU time.
	
	
	==> kubernetes-dashboard [0ca092b3ce3ee7591a69a1325fcce1cc752a14da32abb9827a163b163f63c990] <==
	2025/12/27 20:28:17 Using namespace: kubernetes-dashboard
	2025/12/27 20:28:17 Using in-cluster config to connect to apiserver
	2025/12/27 20:28:17 Using secret token for csrf signing
	2025/12/27 20:28:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 20:28:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 20:28:17 Successful initial request to the apiserver, version: v1.35.0
	2025/12/27 20:28:17 Generating JWE encryption key
	2025/12/27 20:28:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 20:28:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 20:28:17 Initializing JWE encryption key from synchronized object
	2025/12/27 20:28:17 Creating in-cluster Sidecar client
	2025/12/27 20:28:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 20:28:17 Serving insecurely on HTTP port: 9090
	2025/12/27 20:28:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 20:28:17 Starting overwatch
	
	
	==> storage-provisioner [b9a65772953dd8fa2257dfb6c558509023e860f7da1e48e5f6bfae91bae57d63] <==
	I1227 20:28:08.615982       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 20:28:38.619292       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c901e1afa45cf91c5793bd5e02e20608288cabb56b1e113347e01c6d78bf95e4] <==
	I1227 20:28:39.430676       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 20:28:39.430738       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 20:28:39.432968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:42.888083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:47.149157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:50.748132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:53.802850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:56.824954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:56.831882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:28:56.832090       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 20:28:56.832283       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-014435_bd966e42-df16-4cba-af02-8db2023e0a1b!
	I1227 20:28:56.833440       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9f57cc67-39ae-4412-b3d6-f5e4088a0ea3", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-014435_bd966e42-df16-4cba-af02-8db2023e0a1b became leader
	W1227 20:28:56.839793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:56.849441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:28:56.933062       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-014435_bd966e42-df16-4cba-af02-8db2023e0a1b!
	W1227 20:28:58.852202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:28:58.857286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:00.861992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:00.870025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:02.873373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:02.878105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:04.882638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:04.891753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:06.895173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:06.913582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-014435 -n no-preload-014435
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-014435 -n no-preload-014435: exit status 2 (370.972444ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-014435 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-307728 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-307728 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (256.855499ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:29:10Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-307728 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
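The MK_ADDON_ENABLE_PAUSED failure above comes from the paused-state check shelling out to `sudo runc list -f json` inside the node, which exits non-zero here with "open /run/runc: no such file or directory". A minimal sketch of reproducing that check by hand from inside the node (e.g. via `minikube ssh -p newest-cni-307728`), assuming root access; this is illustrative Go, not minikube's own implementation:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer holds the fields of interest from `runc list -f json`,
	// which prints a JSON array of container state objects.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func main() {
		// Same command the failed check ran; on this node it exits non-zero
		// with "open /run/runc: no such file or directory".
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			fmt.Printf("runc list failed: %v\n%s", err, out)
			return
		}
		var containers []runcContainer
		if err := json.Unmarshal(out, &containers); err != nil {
			fmt.Printf("could not parse runc output: %v\n", err)
			return
		}
		for _, c := range containers {
			fmt.Printf("%s\t%s\n", c.ID, c.Status)
		}
	}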
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-307728
helpers_test.go:244: (dbg) docker inspect newest-cni-307728:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "64c609a6122eb3106f9ff66cebdbbe628910b7d3fd5cb3cb509ba3d44be3d3c6",
	        "Created": "2025-12-27T20:28:53.126304312Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 341668,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:28:53.160509192Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/64c609a6122eb3106f9ff66cebdbbe628910b7d3fd5cb3cb509ba3d44be3d3c6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/64c609a6122eb3106f9ff66cebdbbe628910b7d3fd5cb3cb509ba3d44be3d3c6/hostname",
	        "HostsPath": "/var/lib/docker/containers/64c609a6122eb3106f9ff66cebdbbe628910b7d3fd5cb3cb509ba3d44be3d3c6/hosts",
	        "LogPath": "/var/lib/docker/containers/64c609a6122eb3106f9ff66cebdbbe628910b7d3fd5cb3cb509ba3d44be3d3c6/64c609a6122eb3106f9ff66cebdbbe628910b7d3fd5cb3cb509ba3d44be3d3c6-json.log",
	        "Name": "/newest-cni-307728",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-307728:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-307728",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "64c609a6122eb3106f9ff66cebdbbe628910b7d3fd5cb3cb509ba3d44be3d3c6",
	                "LowerDir": "/var/lib/docker/overlay2/f2be7072159fdfaae7f81698823efb92800f860bb8ccada9afcd73cde0a4096e-init/diff:/var/lib/docker/overlay2/37a0694d5e09176fe02add5b16d603e44d65e3a985a3465775c60819ea782502/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2be7072159fdfaae7f81698823efb92800f860bb8ccada9afcd73cde0a4096e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2be7072159fdfaae7f81698823efb92800f860bb8ccada9afcd73cde0a4096e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2be7072159fdfaae7f81698823efb92800f860bb8ccada9afcd73cde0a4096e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-307728",
	                "Source": "/var/lib/docker/volumes/newest-cni-307728/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-307728",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-307728",
	                "name.minikube.sigs.k8s.io": "newest-cni-307728",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "09640e9d5b8fdb67196cb828c4c165df6b97f81e1e7f7060fdb4256822916b28",
	            "SandboxKey": "/var/run/docker/netns/09640e9d5b8f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-307728": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8d45b1129d8ffab2533043d5d1454842b3b9f2cbc16e12ecfd948c089f363538",
	                    "EndpointID": "d8b4596edd57d54f3831659c6ada4c370f7d64b4e49e27abc1f0afcb1c6a6cd0",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "be:5f:aa:6f:70:92",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-307728",
	                        "64c609a6122e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-307728 -n newest-cni-307728
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-307728 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-307728 logs -n 25: (1.222477885s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p bridge-436655                                                                                                                                                                                                                              │ bridge-436655                │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ addons  │ enable metrics-server -p no-preload-014435 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │                     │
	│ delete  │ -p disable-driver-mounts-541137                                                                                                                                                                                                               │ disable-driver-mounts-541137 │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ start   │ -p old-k8s-version-762177 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-762177       │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:28 UTC │
	│ start   │ -p default-k8s-diff-port-954154 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-954154 │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:28 UTC │
	│ stop    │ -p no-preload-014435 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ addons  │ enable dashboard -p no-preload-014435 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:27 UTC │
	│ start   │ -p no-preload-014435 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:27 UTC │ 27 Dec 25 20:28 UTC │
	│ addons  │ enable metrics-server -p embed-certs-820583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-820583           │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ stop    │ -p embed-certs-820583 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-820583           │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-954154 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-954154 │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-820583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-820583           │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ start   │ -p embed-certs-820583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-820583           │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-954154 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-954154 │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ image   │ old-k8s-version-762177 image list --format=json                                                                                                                                                                                               │ old-k8s-version-762177       │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ pause   │ -p old-k8s-version-762177 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-762177       │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ delete  │ -p old-k8s-version-762177                                                                                                                                                                                                                     │ old-k8s-version-762177       │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-954154 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-954154 │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ start   │ -p default-k8s-diff-port-954154 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-954154 │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ delete  │ -p old-k8s-version-762177                                                                                                                                                                                                                     │ old-k8s-version-762177       │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ start   │ -p newest-cni-307728 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-307728            │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:29 UTC │
	│ image   │ no-preload-014435 image list --format=json                                                                                                                                                                                                    │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ pause   │ -p no-preload-014435 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p no-preload-014435                                                                                                                                                                                                                          │ no-preload-014435            │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-307728 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-307728            │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:28:48
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:28:48.500169  340625 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:28:48.500408  340625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:28:48.500418  340625 out.go:374] Setting ErrFile to fd 2...
	I1227 20:28:48.500422  340625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:28:48.500700  340625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:28:48.501196  340625 out.go:368] Setting JSON to false
	I1227 20:28:48.502349  340625 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4277,"bootTime":1766863051,"procs":335,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 20:28:48.502402  340625 start.go:143] virtualization: kvm guest
	I1227 20:28:48.504445  340625 out.go:179] * [newest-cni-307728] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 20:28:48.506067  340625 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:28:48.506073  340625 notify.go:221] Checking for updates...
	I1227 20:28:48.507389  340625 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:28:48.510117  340625 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:28:48.511411  340625 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube
	I1227 20:28:48.516227  340625 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 20:28:48.520113  340625 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:28:48.522736  340625 config.go:182] Loaded profile config "default-k8s-diff-port-954154": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:48.522908  340625 config.go:182] Loaded profile config "embed-certs-820583": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:48.523079  340625 config.go:182] Loaded profile config "no-preload-014435": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:48.523223  340625 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:28:48.555608  340625 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 20:28:48.555757  340625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:28:48.625448  340625 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-27 20:28:48.613118826 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:28:48.625559  340625 docker.go:319] overlay module found
	I1227 20:28:48.627785  340625 out.go:179] * Using the docker driver based on user configuration
	I1227 20:28:48.628870  340625 start.go:309] selected driver: docker
	I1227 20:28:48.628893  340625 start.go:928] validating driver "docker" against <nil>
	I1227 20:28:48.628904  340625 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:28:48.629485  340625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:28:48.682637  340625 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-27 20:28:48.673679788 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:28:48.682799  340625 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	W1227 20:28:48.682830  340625 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1227 20:28:48.683062  340625 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 20:28:48.684798  340625 out.go:179] * Using Docker driver with root privileges
	I1227 20:28:48.685773  340625 cni.go:84] Creating CNI manager for ""
	I1227 20:28:48.685860  340625 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:28:48.685876  340625 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 20:28:48.685963  340625 start.go:353] cluster config:
	{Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:28:48.687269  340625 out.go:179] * Starting "newest-cni-307728" primary control-plane node in "newest-cni-307728" cluster
	I1227 20:28:48.688261  340625 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:28:48.689286  340625 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:28:48.690196  340625 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:28:48.690233  340625 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 20:28:48.690256  340625 cache.go:65] Caching tarball of preloaded images
	I1227 20:28:48.690277  340625 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:28:48.690345  340625 preload.go:251] Found /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 20:28:48.690356  340625 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:28:48.690441  340625 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/config.json ...
	I1227 20:28:48.690458  340625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/config.json: {Name:mke21830f72797f51981ebb2ed1e325363bf8b3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:48.710101  340625 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:28:48.710116  340625 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:28:48.710130  340625 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:28:48.710159  340625 start.go:360] acquireMachinesLock for newest-cni-307728: {Name:mk68119d6288f8bd1ffbe980f508592c691efe9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:28:48.710240  340625 start.go:364] duration metric: took 67.403µs to acquireMachinesLock for "newest-cni-307728"
	I1227 20:28:48.710260  340625 start.go:93] Provisioning new machine with config: &{Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:28:48.710333  340625 start.go:125] createHost starting for "" (driver="docker")
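
The preload lines above show minikube reusing a locally cached tarball (preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4) and an already-pulled kicbase image instead of downloading them again. A minimal Go sketch of that "use the cache if present" check follows; it is illustrative only, not minikube's preload.go, and the cache directory is simply the one visible in this job's log.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // preloadPath builds the expected tarball name for a Kubernetes version
    // and container runtime, mirroring the file name seen in the log.
    func preloadPath(cacheDir, k8sVersion, runtime string) string {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
        return filepath.Join(cacheDir, "preloaded-tarball", name)
    }

    func main() {
        p := preloadPath("/home/jenkins/minikube-integration/22332-10897/.minikube/cache", "v1.35.0", "cri-o")
        if _, err := os.Stat(p); err == nil {
            fmt.Println("found local preload, skipping download:", p)
        } else {
            fmt.Println("no local preload, would download:", p)
        }
    }
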
	I1227 20:28:48.407162  329454 pod_ready.go:83] waiting for pod "kube-proxy-ctvzq" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:48.806633  329454 pod_ready.go:94] pod "kube-proxy-ctvzq" is "Ready"
	I1227 20:28:48.806662  329454 pod_ready.go:86] duration metric: took 399.473531ms for pod "kube-proxy-ctvzq" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:49.008047  329454 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-014435" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:49.407573  329454 pod_ready.go:94] pod "kube-scheduler-no-preload-014435" is "Ready"
	I1227 20:28:49.407604  329454 pod_ready.go:86] duration metric: took 399.528497ms for pod "kube-scheduler-no-preload-014435" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:28:49.407621  329454 pod_ready.go:40] duration metric: took 39.908277209s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:28:49.460861  329454 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 20:28:49.462607  329454 out.go:179] * Done! kubectl is now configured to use "no-preload-014435" cluster and "default" namespace by default
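
The "Done!" line is only printed after pod_ready.go has confirmed that each core kube-system pod (CoreDNS, etcd, apiserver, controller-manager, kube-proxy, scheduler) reports a Ready condition. A rough client-go sketch of that kind of readiness poll is below; it is not minikube's implementation, the pod name is just the kube-proxy pod from this log, and the kubeconfig path is assumed to be the default one.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-ctvzq", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod readiness")
    }
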
	I1227 20:28:47.861047  340025 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-954154" ...
	I1227 20:28:47.861132  340025 cli_runner.go:164] Run: docker start default-k8s-diff-port-954154
	I1227 20:28:48.268150  340025 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-954154 --format={{.State.Status}}
	I1227 20:28:48.287351  340025 kic.go:430] container "default-k8s-diff-port-954154" state is running.
	I1227 20:28:48.287786  340025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-954154
	I1227 20:28:48.306988  340025 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/default-k8s-diff-port-954154/config.json ...
	I1227 20:28:48.307243  340025 machine.go:94] provisionDockerMachine start ...
	I1227 20:28:48.307328  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:48.327277  340025 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:48.327553  340025 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1227 20:28:48.327574  340025 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:28:48.328160  340025 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45072->127.0.0.1:33123: read: connection reset by peer
	I1227 20:28:51.449690  340025 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-954154
	
	I1227 20:28:51.449714  340025 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-954154"
	I1227 20:28:51.449773  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:51.468688  340025 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:51.468993  340025 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1227 20:28:51.469013  340025 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-954154 && echo "default-k8s-diff-port-954154" | sudo tee /etc/hostname
	I1227 20:28:51.599535  340025 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-954154
	
	I1227 20:28:51.599621  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:51.617442  340025 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:51.617738  340025 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1227 20:28:51.617772  340025 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-954154' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-954154/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-954154' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:28:51.741706  340025 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:28:51.741734  340025 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-10897/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-10897/.minikube}
	I1227 20:28:51.741756  340025 ubuntu.go:190] setting up certificates
	I1227 20:28:51.741775  340025 provision.go:84] configureAuth start
	I1227 20:28:51.741846  340025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-954154
	I1227 20:28:51.759749  340025 provision.go:143] copyHostCerts
	I1227 20:28:51.759817  340025 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem, removing ...
	I1227 20:28:51.759836  340025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem
	I1227 20:28:51.759925  340025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem (1078 bytes)
	I1227 20:28:51.760058  340025 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem, removing ...
	I1227 20:28:51.760071  340025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem
	I1227 20:28:51.760106  340025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem (1123 bytes)
	I1227 20:28:51.760260  340025 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem, removing ...
	I1227 20:28:51.760273  340025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem
	I1227 20:28:51.760304  340025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem (1675 bytes)
	I1227 20:28:51.760381  340025 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-954154 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-954154 localhost minikube]
	I1227 20:28:51.832661  340025 provision.go:177] copyRemoteCerts
	I1227 20:28:51.832729  340025 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:28:51.832777  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:51.851087  340025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/default-k8s-diff-port-954154/id_rsa Username:docker}
	I1227 20:28:51.942212  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:28:51.959450  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1227 20:28:51.975995  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:28:51.992950  340025 provision.go:87] duration metric: took 251.152964ms to configureAuth
	I1227 20:28:51.992976  340025 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:28:51.993158  340025 config.go:182] Loaded profile config "default-k8s-diff-port-954154": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:51.993315  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:52.011400  340025 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:52.011631  340025 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1227 20:28:52.011652  340025 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1227 20:28:48.641299  334810 pod_ready.go:104] pod "coredns-7d764666f9-nvnjg" is not "Ready", error: <nil>
	W1227 20:28:51.141968  334810 pod_ready.go:104] pod "coredns-7d764666f9-nvnjg" is not "Ready", error: <nil>
	I1227 20:28:48.711836  340625 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 20:28:48.712071  340625 start.go:159] libmachine.API.Create for "newest-cni-307728" (driver="docker")
	I1227 20:28:48.712103  340625 client.go:173] LocalClient.Create starting
	I1227 20:28:48.712145  340625 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem
	I1227 20:28:48.712175  340625 main.go:144] libmachine: Decoding PEM data...
	I1227 20:28:48.712193  340625 main.go:144] libmachine: Parsing certificate...
	I1227 20:28:48.712238  340625 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem
	I1227 20:28:48.712255  340625 main.go:144] libmachine: Decoding PEM data...
	I1227 20:28:48.712265  340625 main.go:144] libmachine: Parsing certificate...
	I1227 20:28:48.712589  340625 cli_runner.go:164] Run: docker network inspect newest-cni-307728 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 20:28:48.727613  340625 cli_runner.go:211] docker network inspect newest-cni-307728 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 20:28:48.727667  340625 network_create.go:284] running [docker network inspect newest-cni-307728] to gather additional debugging logs...
	I1227 20:28:48.727682  340625 cli_runner.go:164] Run: docker network inspect newest-cni-307728
	W1227 20:28:48.743879  340625 cli_runner.go:211] docker network inspect newest-cni-307728 returned with exit code 1
	I1227 20:28:48.743905  340625 network_create.go:287] error running [docker network inspect newest-cni-307728]: docker network inspect newest-cni-307728: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-307728 not found
	I1227 20:28:48.743927  340625 network_create.go:289] output of [docker network inspect newest-cni-307728]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-307728 not found
	
	** /stderr **
	I1227 20:28:48.744059  340625 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:28:48.760635  340625 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0db0ba8938bc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:24:76:f1:9a:26} reservation:<nil>}
	I1227 20:28:48.761253  340625 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-11f8d597a005 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:16:b4:6c:7e:ff:91} reservation:<nil>}
	I1227 20:28:48.762075  340625 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f7cf3350a110 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:14:0b:19:b4:4d} reservation:<nil>}
	I1227 20:28:48.762703  340625 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-df613bfb14c3 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5a:e7:81:22:a5:aa} reservation:<nil>}
	I1227 20:28:48.763398  340625 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-8bb8ec9ff71c IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:c2:ba:45:ee:97:15} reservation:<nil>}
	I1227 20:28:48.763977  340625 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-da47a33f1df0 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:b6:9e:57:b1:b3:31} reservation:<nil>}
	I1227 20:28:48.764830  340625 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ee9150}
	I1227 20:28:48.764849  340625 network_create.go:124] attempt to create docker network newest-cni-307728 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1227 20:28:48.764905  340625 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-307728 newest-cni-307728
	I1227 20:28:48.812651  340625 network_create.go:108] docker network newest-cni-307728 192.168.103.0/24 created
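
The network.go lines above walk the private 192.168.x.0/24 ranges in steps of 9 (49, 58, 67, 76, 85, 94, 103), skip every subnet already claimed by an existing bridge, and settle on 192.168.103.0/24 for the new network. A simplified stdlib-only sketch of that selection logic follows; the taken set is hard-coded from this log, whereas minikube discovers it by inspecting the host's interfaces and docker networks.

    package main

    import (
        "fmt"
        "net"
    )

    // firstFreeSubnet returns the first 192.168.x.0/24 subnet, starting at
    // 192.168.49.0/24 and stepping by 9, that is not in the taken set.
    func firstFreeSubnet(taken map[string]bool) *net.IPNet {
        for octet := 49; octet <= 254; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if taken[cidr] {
                continue
            }
            _, subnet, _ := net.ParseCIDR(cidr)
            return subnet
        }
        return nil
    }

    func main() {
        // Subnets already used by existing minikube bridges in this log.
        taken := map[string]bool{
            "192.168.49.0/24": true, "192.168.58.0/24": true,
            "192.168.67.0/24": true, "192.168.76.0/24": true,
            "192.168.85.0/24": true, "192.168.94.0/24": true,
        }
        fmt.Println("using free private subnet:", firstFreeSubnet(taken)) // 192.168.103.0/24
    }
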
	I1227 20:28:48.812678  340625 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-307728" container
	I1227 20:28:48.812754  340625 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 20:28:48.829760  340625 cli_runner.go:164] Run: docker volume create newest-cni-307728 --label name.minikube.sigs.k8s.io=newest-cni-307728 --label created_by.minikube.sigs.k8s.io=true
	I1227 20:28:48.846811  340625 oci.go:103] Successfully created a docker volume newest-cni-307728
	I1227 20:28:48.846879  340625 cli_runner.go:164] Run: docker run --rm --name newest-cni-307728-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-307728 --entrypoint /usr/bin/test -v newest-cni-307728:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 20:28:49.251301  340625 oci.go:107] Successfully prepared a docker volume newest-cni-307728
	I1227 20:28:49.251356  340625 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:28:49.251371  340625 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 20:28:49.251443  340625 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-307728:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 20:28:53.048259  340625 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-307728:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.796770201s)
	I1227 20:28:53.048298  340625 kic.go:203] duration metric: took 3.796923553s to extract preloaded images to volume ...
	W1227 20:28:53.048388  340625 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1227 20:28:53.048428  340625 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1227 20:28:53.048478  340625 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 20:28:53.106204  340625 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-307728 --name newest-cni-307728 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-307728 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-307728 --network newest-cni-307728 --ip 192.168.103.2 --volume newest-cni-307728:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 20:28:53.377715  340625 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Running}}
	I1227 20:28:53.396903  340625 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:28:53.419741  340625 cli_runner.go:164] Run: docker exec newest-cni-307728 stat /var/lib/dpkg/alternatives/iptables
	I1227 20:28:53.470424  340625 oci.go:144] the created container "newest-cni-307728" has a running status.
	I1227 20:28:53.470467  340625 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa...
	I1227 20:28:53.122891  340025 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:28:53.122939  340025 machine.go:97] duration metric: took 4.815676959s to provisionDockerMachine
	I1227 20:28:53.122954  340025 start.go:293] postStartSetup for "default-k8s-diff-port-954154" (driver="docker")
	I1227 20:28:53.122967  340025 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:28:53.123032  340025 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:28:53.123077  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:53.143650  340025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/default-k8s-diff-port-954154/id_rsa Username:docker}
	I1227 20:28:53.241428  340025 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:28:53.245431  340025 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:28:53.245463  340025 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:28:53.245476  340025 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/addons for local assets ...
	I1227 20:28:53.245527  340025 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/files for local assets ...
	I1227 20:28:53.245638  340025 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem -> 144272.pem in /etc/ssl/certs
	I1227 20:28:53.245750  340025 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:28:53.259754  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:28:53.277646  340025 start.go:296] duration metric: took 154.680132ms for postStartSetup
	I1227 20:28:53.277719  340025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:28:53.277784  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:53.296596  340025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/default-k8s-diff-port-954154/id_rsa Username:docker}
	I1227 20:28:53.388191  340025 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:28:53.393370  340025 fix.go:56] duration metric: took 5.55357325s for fixHost
	I1227 20:28:53.393399  340025 start.go:83] releasing machines lock for "default-k8s-diff-port-954154", held for 5.5536385s
	I1227 20:28:53.393469  340025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-954154
	I1227 20:28:53.414844  340025 ssh_runner.go:195] Run: cat /version.json
	I1227 20:28:53.414940  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:53.414964  340025 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:28:53.415054  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:53.439070  340025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/default-k8s-diff-port-954154/id_rsa Username:docker}
	I1227 20:28:53.441062  340025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/default-k8s-diff-port-954154/id_rsa Username:docker}
	I1227 20:28:53.530425  340025 ssh_runner.go:195] Run: systemctl --version
	I1227 20:28:53.593039  340025 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:28:53.635364  340025 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:28:53.640899  340025 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:28:53.641235  340025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:28:53.650333  340025 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:28:53.650354  340025 start.go:496] detecting cgroup driver to use...
	I1227 20:28:53.650397  340025 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 20:28:53.650439  340025 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:28:53.668529  340025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:28:53.689731  340025 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:28:53.689800  340025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:28:53.709961  340025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:28:53.727001  340025 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:28:53.832834  340025 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:28:53.922310  340025 docker.go:234] disabling docker service ...
	I1227 20:28:53.922365  340025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:28:53.936501  340025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:28:53.950162  340025 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:28:54.041512  340025 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:28:54.134155  340025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:28:54.147385  340025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:28:54.161720  340025 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:28:54.161796  340025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:54.171044  340025 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 20:28:54.171106  340025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:54.180065  340025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:54.189150  340025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:54.197442  340025 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:28:54.205514  340025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:54.213985  340025 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:54.222017  340025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:54.230077  340025 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:28:54.237131  340025 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:28:54.244100  340025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:28:54.328674  340025 ssh_runner.go:195] Run: sudo systemctl restart crio
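
The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place before the daemon-reload and crio restart: they pin the pause image, force the systemd cgroup manager, re-add conmon_cgroup = "pod", and open unprivileged low ports via default_sysctls. Assuming a stock drop-in file, the affected fragment should end up roughly as below; the [crio.image]/[crio.runtime] section names follow CRI-O's documented layout and are not shown in the log itself.

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
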
	I1227 20:28:54.465250  340025 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:28:54.465333  340025 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:28:54.469352  340025 start.go:574] Will wait 60s for crictl version
	I1227 20:28:54.469401  340025 ssh_runner.go:195] Run: which crictl
	I1227 20:28:54.472893  340025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:28:54.498891  340025 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:28:54.498989  340025 ssh_runner.go:195] Run: crio --version
	I1227 20:28:54.526206  340025 ssh_runner.go:195] Run: crio --version
	I1227 20:28:54.556148  340025 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:28:54.557374  340025 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-954154 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:28:54.575049  340025 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 20:28:54.578875  340025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:28:54.588766  340025 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-954154 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-954154 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:28:54.588870  340025 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:28:54.588927  340025 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:28:54.619011  340025 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:28:54.619030  340025 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:28:54.619069  340025 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:28:54.646154  340025 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:28:54.646177  340025 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:28:54.646185  340025 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.35.0 crio true true} ...
	I1227 20:28:54.646334  340025 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-954154 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-954154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:28:54.646422  340025 ssh_runner.go:195] Run: crio config
	I1227 20:28:54.692232  340025 cni.go:84] Creating CNI manager for ""
	I1227 20:28:54.692253  340025 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:28:54.692268  340025 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:28:54.692305  340025 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-954154 NodeName:default-k8s-diff-port-954154 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:28:54.692423  340025 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-954154"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:28:54.692483  340025 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:28:54.700975  340025 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:28:54.701056  340025 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:28:54.709484  340025 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1227 20:28:54.722400  340025 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:28:54.735438  340025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
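
The generated manifest above is a single multi-document file combining InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta4), KubeletConfiguration, and KubeProxyConfiguration, written by the scp line just shown to /var/tmp/minikube/kubeadm.yaml.new (2224 bytes); note that its cgroupDriver: systemd matches the cgroup_manager the earlier sed commands set for CRI-O. A hedged sketch using gopkg.in/yaml.v3 to enumerate those documents, assuming that path on the node, is below.

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    // Print the apiVersion/kind of every document in a multi-document
    // kubeadm config like the one generated above.
    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
        }
    }
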
	I1227 20:28:54.747514  340025 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:28:54.751059  340025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:28:54.761269  340025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:28:54.842277  340025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:28:54.869119  340025 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/default-k8s-diff-port-954154 for IP: 192.168.85.2
	I1227 20:28:54.869143  340025 certs.go:195] generating shared ca certs ...
	I1227 20:28:54.869164  340025 certs.go:227] acquiring lock for ca certs: {Name:mkf159d13acb73e6d0ac046e4aaef1dce56a185d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:54.869322  340025 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key
	I1227 20:28:54.869377  340025 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key
	I1227 20:28:54.869391  340025 certs.go:257] generating profile certs ...
	I1227 20:28:54.869519  340025 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/default-k8s-diff-port-954154/client.key
	I1227 20:28:54.869600  340025 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/default-k8s-diff-port-954154/apiserver.key.b37aaa7a
	I1227 20:28:54.869654  340025 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/default-k8s-diff-port-954154/proxy-client.key
	I1227 20:28:54.869797  340025 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem (1338 bytes)
	W1227 20:28:54.869837  340025 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427_empty.pem, impossibly tiny 0 bytes
	I1227 20:28:54.869849  340025 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 20:28:54.869881  340025 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:28:54.869933  340025 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:28:54.869976  340025 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem (1675 bytes)
	I1227 20:28:54.870034  340025 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:28:54.870823  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:28:54.889499  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:28:54.908467  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:28:54.928722  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 20:28:54.956319  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/default-k8s-diff-port-954154/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 20:28:54.976184  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/default-k8s-diff-port-954154/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 20:28:54.992715  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/default-k8s-diff-port-954154/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:28:55.009591  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/default-k8s-diff-port-954154/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 20:28:55.025543  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /usr/share/ca-certificates/144272.pem (1708 bytes)
	I1227 20:28:55.042531  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:28:55.061224  340025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem --> /usr/share/ca-certificates/14427.pem (1338 bytes)
	I1227 20:28:55.081310  340025 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:28:55.095303  340025 ssh_runner.go:195] Run: openssl version
	I1227 20:28:55.101512  340025 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/144272.pem
	I1227 20:28:55.109364  340025 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/144272.pem /etc/ssl/certs/144272.pem
	I1227 20:28:55.117062  340025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144272.pem
	I1227 20:28:55.120521  340025 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 19:58 /usr/share/ca-certificates/144272.pem
	I1227 20:28:55.120562  340025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144272.pem
	I1227 20:28:55.156522  340025 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:28:55.163769  340025 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:55.170984  340025 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:28:55.178467  340025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:55.182664  340025 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:55.182714  340025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:55.216669  340025 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:28:55.224508  340025 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14427.pem
	I1227 20:28:55.231727  340025 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14427.pem /etc/ssl/certs/14427.pem
	I1227 20:28:55.240655  340025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14427.pem
	I1227 20:28:55.244863  340025 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 19:58 /usr/share/ca-certificates/14427.pem
	I1227 20:28:55.244927  340025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14427.pem
	I1227 20:28:55.281470  340025 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
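
Each CA bundle above is installed by copying the PEM into /usr/share/ca-certificates, symlinking it into /etc/ssl/certs, and then checking for a <subject-hash>.0 link (3ec20f2e.0, b5213941.0, 51391683.0) that OpenSSL uses for certificate lookup. A small Go sketch that derives the hash name with the same openssl invocation seen in the log and creates the link is below; it is illustrative only and would need to run as root on the target node.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash creates /etc/ssl/certs/<hash>.0 pointing at pemPath,
    // where <hash> comes from `openssl x509 -hash -noout -in pemPath`,
    // mirroring the hashing step in the log above.
    func linkBySubjectHash(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // ignore error: the link may not exist yet
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
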
	I1227 20:28:55.288784  340025 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:28:55.292510  340025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:28:55.333080  340025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:28:55.369784  340025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:28:55.424693  340025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:28:55.475758  340025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:28:55.533819  340025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
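
	The six openssl runs above confirm that each existing control-plane certificate is still valid for at least the next 24 hours (86400 seconds) before the cluster reuses it. A minimal Go sketch of the same check, with a hypothetical file path used only for illustration:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// certExpiresWithin reports whether the PEM-encoded certificate at path
	// expires within d, mirroring `openssl x509 -checkend <seconds>`.
	func certExpiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// Hypothetical path; the log checks several certs under /var/lib/minikube/certs.
		expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if expiring {
			fmt.Println("certificate expires within 24h; it should be regenerated")
		} else {
			fmt.Println("certificate is valid for at least another 24h")
		}
	}
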
	I1227 20:28:55.591758  340025 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-954154 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-954154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:28:55.591848  340025 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:28:55.591890  340025 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:28:55.627989  340025 cri.go:96] found id: "5931439a8c0a571b5456aa8ff89ec7efa4a328c297d4825276b5ce8da13b99a6"
	I1227 20:28:55.628014  340025 cri.go:96] found id: "706a22c5fabaf1ee5036f9f7e94b4c7a02acaeb2002009e6416b6682929512f2"
	I1227 20:28:55.628020  340025 cri.go:96] found id: "0959afbe1a9958167e4657feae2ee275c220e183182d01f3bfb3c248002f75ad"
	I1227 20:28:55.628027  340025 cri.go:96] found id: "8b0860861ddb032e4078fd3575653ab5f5a77cc25e17d9ecb7910330acbef6e8"
	I1227 20:28:55.628032  340025 cri.go:96] found id: ""
	I1227 20:28:55.628077  340025 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 20:28:55.642876  340025 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:28:55Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:28:55.642973  340025 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:28:55.652554  340025 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:28:55.652578  340025 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:28:55.652625  340025 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:28:55.660979  340025 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:28:55.662107  340025 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-954154" does not appear in /home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:28:55.662856  340025 kubeconfig.go:62] /home/jenkins/minikube-integration/22332-10897/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-954154" cluster setting kubeconfig missing "default-k8s-diff-port-954154" context setting]
	I1227 20:28:55.664153  340025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/kubeconfig: {Name:mk4ccafb996928672a8e78a62ce47d8add645009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:55.666338  340025 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:28:55.676564  340025 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1227 20:28:55.676593  340025 kubeadm.go:602] duration metric: took 24.008347ms to restartPrimaryControlPlane
	I1227 20:28:55.676602  340025 kubeadm.go:403] duration metric: took 84.854268ms to StartCluster
	I1227 20:28:55.676617  340025 settings.go:142] acquiring lock: {Name:mkf3077b12a3cef0d75d0d0642c2652aa74718f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:55.676673  340025 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:28:55.678946  340025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/kubeconfig: {Name:mk4ccafb996928672a8e78a62ce47d8add645009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:55.679180  340025 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:28:55.679553  340025 config.go:182] Loaded profile config "default-k8s-diff-port-954154": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:55.679619  340025 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:28:55.679775  340025 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-954154"
	I1227 20:28:55.679791  340025 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-954154"
	W1227 20:28:55.679799  340025 addons.go:248] addon storage-provisioner should already be in state true
	I1227 20:28:55.679823  340025 host.go:66] Checking if "default-k8s-diff-port-954154" exists ...
	I1227 20:28:55.679928  340025 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-954154"
	I1227 20:28:55.679956  340025 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-954154"
	W1227 20:28:55.679964  340025 addons.go:248] addon dashboard should already be in state true
	I1227 20:28:55.679991  340025 host.go:66] Checking if "default-k8s-diff-port-954154" exists ...
	I1227 20:28:55.680547  340025 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-954154 --format={{.State.Status}}
	I1227 20:28:55.680638  340025 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-954154"
	I1227 20:28:55.680657  340025 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-954154"
	I1227 20:28:55.681186  340025 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-954154 --format={{.State.Status}}
	I1227 20:28:55.683393  340025 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-954154 --format={{.State.Status}}
	I1227 20:28:55.683653  340025 out.go:179] * Verifying Kubernetes components...
	I1227 20:28:55.684780  340025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:28:55.714633  340025 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:28:55.716852  340025 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-954154"
	W1227 20:28:55.716877  340025 addons.go:248] addon default-storageclass should already be in state true
	I1227 20:28:55.716906  340025 host.go:66] Checking if "default-k8s-diff-port-954154" exists ...
	I1227 20:28:55.717089  340025 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 20:28:55.717135  340025 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:28:55.717147  340025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:28:55.717215  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:55.717777  340025 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-954154 --format={{.State.Status}}
	I1227 20:28:55.722759  340025 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 20:28:53.664479  340625 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 20:28:53.699126  340625 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:28:53.720716  340625 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 20:28:53.720739  340625 kic_runner.go:114] Args: [docker exec --privileged newest-cni-307728 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 20:28:53.774092  340625 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:28:53.795085  340625 machine.go:94] provisionDockerMachine start ...
	I1227 20:28:53.795200  340625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:28:53.815121  340625 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:53.815367  340625 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1227 20:28:53.815380  340625 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:28:53.946421  340625 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-307728
	
	I1227 20:28:53.946449  340625 ubuntu.go:182] provisioning hostname "newest-cni-307728"
	I1227 20:28:53.946514  340625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:28:53.967479  340625 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:53.967688  340625 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1227 20:28:53.967701  340625 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-307728 && echo "newest-cni-307728" | sudo tee /etc/hostname
	I1227 20:28:54.109706  340625 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-307728
	
	I1227 20:28:54.109778  340625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:28:54.129736  340625 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:54.129958  340625 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1227 20:28:54.129980  340625 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-307728' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-307728/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-307728' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:28:54.255088  340625 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:28:54.255111  340625 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-10897/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-10897/.minikube}
	I1227 20:28:54.255160  340625 ubuntu.go:190] setting up certificates
	I1227 20:28:54.255172  340625 provision.go:84] configureAuth start
	I1227 20:28:54.255217  340625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-307728
	I1227 20:28:54.276936  340625 provision.go:143] copyHostCerts
	I1227 20:28:54.276997  340625 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem, removing ...
	I1227 20:28:54.277008  340625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem
	I1227 20:28:54.277094  340625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem (1078 bytes)
	I1227 20:28:54.277219  340625 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem, removing ...
	I1227 20:28:54.277228  340625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem
	I1227 20:28:54.277279  340625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem (1123 bytes)
	I1227 20:28:54.277365  340625 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem, removing ...
	I1227 20:28:54.277372  340625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem
	I1227 20:28:54.277407  340625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem (1675 bytes)
	I1227 20:28:54.277482  340625 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem org=jenkins.newest-cni-307728 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-307728]
	I1227 20:28:54.307332  340625 provision.go:177] copyRemoteCerts
	I1227 20:28:54.307382  340625 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:28:54.307415  340625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:28:54.325258  340625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:28:54.419033  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:28:54.438154  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 20:28:54.455050  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 20:28:54.472709  340625 provision.go:87] duration metric: took 217.519219ms to configureAuth
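
	configureAuth above generates a server certificate whose SANs cover every name the machine may be reached by (loopback, the container IP, the hostname, and "minikube"). A rough Go sketch of issuing such a SAN-bearing certificate; the SAN list and validity period are illustrative, and the certificate is self-signed here rather than signed by the minikube CA key as the real code does:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Hypothetical SAN values; the real ones come from the profile config.
		ipSANs := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")}
		dnsSANs := []string{"localhost", "minikube", "newest-cni-307728"}

		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "newest-cni-307728"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  ipSANs,
			DNSNames:     dnsSANs,
		}
		// Self-signed for brevity (template doubles as parent).
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
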
	I1227 20:28:54.472736  340625 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:28:54.472956  340625 config.go:182] Loaded profile config "newest-cni-307728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:54.473073  340625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:28:54.492336  340625 main.go:144] libmachine: Using SSH client type: native
	I1227 20:28:54.492642  340625 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1227 20:28:54.492669  340625 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:28:54.753361  340625 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:28:54.753389  340625 machine.go:97] duration metric: took 958.279107ms to provisionDockerMachine
	I1227 20:28:54.753401  340625 client.go:176] duration metric: took 6.041292407s to LocalClient.Create
	I1227 20:28:54.753424  340625 start.go:167] duration metric: took 6.041353878s to libmachine.API.Create "newest-cni-307728"
	I1227 20:28:54.753439  340625 start.go:293] postStartSetup for "newest-cni-307728" (driver="docker")
	I1227 20:28:54.753451  340625 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:28:54.753523  340625 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:28:54.753568  340625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:28:54.772791  340625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:28:54.870458  340625 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:28:54.874573  340625 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:28:54.874605  340625 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:28:54.874618  340625 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/addons for local assets ...
	I1227 20:28:54.874671  340625 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/files for local assets ...
	I1227 20:28:54.874756  340625 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem -> 144272.pem in /etc/ssl/certs
	I1227 20:28:54.874874  340625 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:28:54.883036  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:28:54.906634  340625 start.go:296] duration metric: took 153.179795ms for postStartSetup
	I1227 20:28:54.907029  340625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-307728
	I1227 20:28:54.928933  340625 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/config.json ...
	I1227 20:28:54.929249  340625 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:28:54.929300  340625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:28:54.954691  340625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:28:55.044982  340625 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:28:55.049336  340625 start.go:128] duration metric: took 6.338989786s to createHost
	I1227 20:28:55.049357  340625 start.go:83] releasing machines lock for "newest-cni-307728", held for 6.339107658s
	I1227 20:28:55.049418  340625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-307728
	I1227 20:28:55.070462  340625 ssh_runner.go:195] Run: cat /version.json
	I1227 20:28:55.070526  340625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:28:55.070556  340625 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:28:55.070631  340625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:28:55.089304  340625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:28:55.090352  340625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:28:55.233933  340625 ssh_runner.go:195] Run: systemctl --version
	I1227 20:28:55.241758  340625 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:28:55.275894  340625 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:28:55.280648  340625 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:28:55.280715  340625 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:28:55.307733  340625 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1227 20:28:55.307753  340625 start.go:496] detecting cgroup driver to use...
	I1227 20:28:55.307785  340625 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 20:28:55.307839  340625 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:28:55.323192  340625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:28:55.335205  340625 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:28:55.335265  340625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:28:55.351180  340625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:28:55.369175  340625 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:28:55.473778  340625 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:28:55.602420  340625 docker.go:234] disabling docker service ...
	I1227 20:28:55.602482  340625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:28:55.625550  340625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:28:55.643841  340625 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:28:55.802566  340625 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:28:55.918642  340625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:28:55.936192  340625 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:28:55.955225  340625 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:28:55.955288  340625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:55.966672  340625 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 20:28:55.966742  340625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:55.978502  340625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:55.989239  340625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:56.000177  340625 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:28:56.009564  340625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:56.022264  340625 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:56.037345  340625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:28:56.049451  340625 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:28:56.056993  340625 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:28:56.064796  340625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:28:56.162614  340625 ssh_runner.go:195] Run: sudo systemctl restart crio
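
	The sed commands above point CRI-O at the desired pause image and cgroup manager by rewriting /etc/crio/crio.conf.d/02-crio.conf before the service is restarted. A small Go sketch of just the pause-image substitution under the same assumptions (the local file name is hypothetical):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setPauseImage rewrites the pause_image line in a CRI-O drop-in config,
	// the same substitution the sed command in the log performs.
	func setPauseImage(path, image string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		updated := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
		return os.WriteFile(path, updated, 0o644)
	}

	func main() {
		// Hypothetical local copy of the drop-in file.
		if err := setPauseImage("02-crio.conf", "registry.k8s.io/pause:3.10.1"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}

	After the rewrite, a daemon-reload and `systemctl restart crio` (as in the log) are what actually make the new pause image take effect.
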
	I1227 20:28:56.313255  340625 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:28:56.313324  340625 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:28:56.317889  340625 start.go:574] Will wait 60s for crictl version
	I1227 20:28:56.317981  340625 ssh_runner.go:195] Run: which crictl
	I1227 20:28:56.322051  340625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:28:56.349449  340625 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:28:56.349532  340625 ssh_runner.go:195] Run: crio --version
	I1227 20:28:56.382000  340625 ssh_runner.go:195] Run: crio --version
	I1227 20:28:56.413048  340625 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:28:56.414278  340625 cli_runner.go:164] Run: docker network inspect newest-cni-307728 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:28:56.433453  340625 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1227 20:28:56.437559  340625 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:28:56.449949  340625 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1227 20:28:55.724000  340025 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 20:28:55.724016  340025 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 20:28:55.724065  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:55.744982  340025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/default-k8s-diff-port-954154/id_rsa Username:docker}
	I1227 20:28:55.748159  340025 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:28:55.748180  340025 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:28:55.748239  340025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:28:55.754480  340025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/default-k8s-diff-port-954154/id_rsa Username:docker}
	I1227 20:28:55.778258  340025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/default-k8s-diff-port-954154/id_rsa Username:docker}
	I1227 20:28:55.867117  340025 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 20:28:55.867140  340025 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 20:28:55.872461  340025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:28:55.875196  340025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:28:55.883874  340025 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 20:28:55.883895  340025 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 20:28:55.887443  340025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:28:55.901123  340025 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 20:28:55.901148  340025 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 20:28:55.918460  340025 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 20:28:55.918485  340025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 20:28:55.937315  340025 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 20:28:55.937335  340025 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 20:28:55.952528  340025 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 20:28:55.952556  340025 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 20:28:55.967852  340025 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 20:28:55.967875  340025 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 20:28:55.984392  340025 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 20:28:55.984418  340025 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 20:28:56.000591  340025 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:28:56.000616  340025 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 20:28:56.014356  340025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:28:57.527657  340025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.655168953s)
	I1227 20:28:57.527711  340025 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.652481169s)
	I1227 20:28:57.527762  340025 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-954154" to be "Ready" ...
	I1227 20:28:57.527787  340025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.513395036s)
	I1227 20:28:57.527723  340025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.640259424s)
	I1227 20:28:57.529812  340025 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-954154 addons enable metrics-server
	
	I1227 20:28:57.536501  340025 node_ready.go:49] node "default-k8s-diff-port-954154" is "Ready"
	I1227 20:28:57.536525  340025 node_ready.go:38] duration metric: took 8.726968ms for node "default-k8s-diff-port-954154" to be "Ready" ...
	I1227 20:28:57.536540  340025 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:28:57.536581  340025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:28:57.541048  340025 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 20:28:57.542123  340025 addons.go:530] duration metric: took 1.862504727s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1227 20:28:57.549333  340025 api_server.go:72] duration metric: took 1.870126325s to wait for apiserver process to appear ...
	I1227 20:28:57.549353  340025 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:28:57.549370  340025 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1227 20:28:57.553748  340025 api_server.go:325] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:28:57.553768  340025 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
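
	The 500 responses above are expected while the apiserver's post-start hooks (the RBAC bootstrap roles and default priority classes flagged with [-]) finish; the startup wait simply retries /healthz until it returns 200. A rough Go sketch of such a poll, assuming a self-signed serving certificate (hence the skipped TLS verification) and using this profile's 8444 endpoint only as an example:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
	// A 500 during startup just means post-start hooks have not finished yet.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The serving cert is self-signed in this sketch, so skip verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("healthz at %s not ready after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.85.2:8444/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver is healthy")
	}
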
	I1227 20:28:56.450940  340625 kubeadm.go:884] updating cluster {Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:28:56.451057  340625 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:28:56.451105  340625 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:28:56.486578  340625 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:28:56.486604  340625 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:28:56.486659  340625 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:28:56.516779  340625 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:28:56.516806  340625 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:28:56.516814  340625 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0 crio true true} ...
	I1227 20:28:56.516942  340625 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-307728 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:28:56.517034  340625 ssh_runner.go:195] Run: crio config
	I1227 20:28:56.564462  340625 cni.go:84] Creating CNI manager for ""
	I1227 20:28:56.564481  340625 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:28:56.564497  340625 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1227 20:28:56.564520  340625 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-307728 NodeName:newest-cni-307728 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:28:56.564660  340625 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-307728"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:28:56.564717  340625 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:28:56.574206  340625 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:28:56.574276  340625 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:28:56.582079  340625 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1227 20:28:56.600287  340625 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:28:56.616380  340625 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1227 20:28:56.629039  340625 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:28:56.632734  340625 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
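
	The bash one-liner above keeps the /etc/hosts update idempotent: any existing line for control-plane.minikube.internal is filtered out before the fresh mapping is appended. A small Go sketch of the same filter-and-append approach, operating on the file contents as a string rather than writing /etc/hosts directly (which needs root, as in the log):

	package main

	import (
		"fmt"
		"strings"
	)

	// upsertHostsEntry drops any existing mapping whose last field is host and
	// appends a fresh "ip<TAB>host" line, mirroring the grep -v / echo pipeline.
	func upsertHostsEntry(contents, ip, host string) string {
		var out strings.Builder
		for _, line := range strings.Split(strings.TrimRight(contents, "\n"), "\n") {
			fields := strings.Fields(line)
			if len(fields) >= 2 && fields[len(fields)-1] == host {
				continue // stale mapping for this host
			}
			out.WriteString(line)
			out.WriteString("\n")
		}
		out.WriteString(ip + "\t" + host + "\n")
		return out.String()
	}

	func main() {
		// Hypothetical starting contents used only for illustration.
		original := "127.0.0.1\tlocalhost\n192.168.103.1\thost.minikube.internal\n"
		fmt.Print(upsertHostsEntry(original, "192.168.103.2", "control-plane.minikube.internal"))
	}
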
	I1227 20:28:56.643610  340625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:28:56.731167  340625 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:28:56.767503  340625 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728 for IP: 192.168.103.2
	I1227 20:28:56.767525  340625 certs.go:195] generating shared ca certs ...
	I1227 20:28:56.767558  340625 certs.go:227] acquiring lock for ca certs: {Name:mkf159d13acb73e6d0ac046e4aaef1dce56a185d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:56.767733  340625 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key
	I1227 20:28:56.767803  340625 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key
	I1227 20:28:56.767817  340625 certs.go:257] generating profile certs ...
	I1227 20:28:56.767890  340625 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/client.key
	I1227 20:28:56.767942  340625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/client.crt with IP's: []
	I1227 20:28:56.794375  340625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/client.crt ...
	I1227 20:28:56.794408  340625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/client.crt: {Name:mkbe31918a2628f8309a18a3c482be7f59d5e510 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:56.794621  340625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/client.key ...
	I1227 20:28:56.794636  340625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/client.key: {Name:mkbc3d519f763199b338bf70577fc2817f7c4332 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:56.794741  340625 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.key.f45295df
	I1227 20:28:56.794772  340625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.crt.f45295df with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1227 20:28:56.879148  340625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.crt.f45295df ...
	I1227 20:28:56.879178  340625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.crt.f45295df: {Name:mk64269dd374c740149f7faf9e729189e8331f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:56.879382  340625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.key.f45295df ...
	I1227 20:28:56.879400  340625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.key.f45295df: {Name:mkc2c754a6d53e33d9862453e662ca2209e188d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:56.879503  340625 certs.go:382] copying /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.crt.f45295df -> /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.crt
	I1227 20:28:56.879600  340625 certs.go:386] copying /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.key.f45295df -> /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.key
	I1227 20:28:56.879659  340625 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.key
	I1227 20:28:56.879674  340625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.crt with IP's: []
	I1227 20:28:56.951167  340625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.crt ...
	I1227 20:28:56.951204  340625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.crt: {Name:mk61de4f8eabcfb14024a7f87b814c37a2ed9228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:56.951385  340625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.key ...
	I1227 20:28:56.951404  340625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.key: {Name:mk921c81a121096b317f7cf3e18e26665afa5455 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:28:56.951654  340625 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem (1338 bytes)
	W1227 20:28:56.951708  340625 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427_empty.pem, impossibly tiny 0 bytes
	I1227 20:28:56.951725  340625 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 20:28:56.951762  340625 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:28:56.951794  340625 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:28:56.951828  340625 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem (1675 bytes)
	I1227 20:28:56.951885  340625 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:28:56.952685  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:28:56.989260  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:28:57.016199  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:28:57.038448  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 20:28:57.063231  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 20:28:57.083056  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:28:57.103361  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:28:57.124895  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:28:57.146997  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:28:57.168985  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem --> /usr/share/ca-certificates/14427.pem (1338 bytes)
	I1227 20:28:57.192337  340625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /usr/share/ca-certificates/144272.pem (1708 bytes)
	I1227 20:28:57.212648  340625 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:28:57.226053  340625 ssh_runner.go:195] Run: openssl version
	I1227 20:28:57.232690  340625 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:57.240634  340625 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:28:57.248278  340625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:57.253026  340625 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:57.253083  340625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:28:57.293170  340625 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:28:57.302311  340625 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 20:28:57.310221  340625 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14427.pem
	I1227 20:28:57.319835  340625 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14427.pem /etc/ssl/certs/14427.pem
	I1227 20:28:57.328517  340625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14427.pem
	I1227 20:28:57.333508  340625 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 19:58 /usr/share/ca-certificates/14427.pem
	I1227 20:28:57.333570  340625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14427.pem
	I1227 20:28:57.385727  340625 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:28:57.395544  340625 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/14427.pem /etc/ssl/certs/51391683.0
	I1227 20:28:57.405387  340625 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/144272.pem
	I1227 20:28:57.414374  340625 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/144272.pem /etc/ssl/certs/144272.pem
	I1227 20:28:57.422682  340625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144272.pem
	I1227 20:28:57.426727  340625 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 19:58 /usr/share/ca-certificates/144272.pem
	I1227 20:28:57.426781  340625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144272.pem
	I1227 20:28:57.468027  340625 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:28:57.475579  340625 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/144272.pem /etc/ssl/certs/3ec20f2e.0
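
The openssl x509 -hash / ln -fs pairs above install each CA into the OpenSSL trust directory under its subject hash (e.g. b5213941.0). A hedged Go sketch of that one step, assuming openssl is on PATH and write access to /etc/ssl/certs; minikube itself runs the equivalent shell commands over SSH:

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log

        // `openssl x509 -hash -noout -in <pem>` prints the subject hash, e.g. "b5213941".
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            log.Fatalf("hashing %s: %v", pem, err)
        }
        hash := strings.TrimSpace(string(out))

        // /etc/ssl/certs/<hash>.0 is how OpenSSL looks up a trusted CA by subject hash.
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // same effect as the -f in `ln -fs`
        if err := os.Symlink(pem, link); err != nil {
            log.Fatalf("symlink: %v", err)
        }
        fmt.Println("linked", link, "->", pem)
    }
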
	I1227 20:28:57.482669  340625 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:28:57.486989  340625 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 20:28:57.487049  340625 kubeadm.go:401] StartCluster: {Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:28:57.487127  340625 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:28:57.487176  340625 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:28:57.526113  340625 cri.go:96] found id: ""
	I1227 20:28:57.526185  340625 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:28:57.535676  340625 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 20:28:57.544346  340625 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 20:28:57.544400  340625 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 20:28:57.552362  340625 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 20:28:57.552381  340625 kubeadm.go:158] found existing configuration files:
	
	I1227 20:28:57.552419  340625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 20:28:57.559859  340625 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 20:28:57.559901  340625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 20:28:57.566894  340625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 20:28:57.574224  340625 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 20:28:57.574271  340625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 20:28:57.581383  340625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 20:28:57.588654  340625 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 20:28:57.588689  340625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 20:28:57.595675  340625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 20:28:57.603162  340625 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 20:28:57.603207  340625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
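
The grep-then-rm sequence above removes any /etc/kubernetes kubeconfig that does not reference the expected control-plane endpoint, so kubeadm can regenerate it. A rough Go equivalent (a sketch only; paths and endpoint copied from the log, error handling simplified to mimic `rm -f`):

    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing file or wrong endpoint: delete it, ignoring errors like `rm -f`.
                _ = os.Remove(f)
                log.Printf("removed stale %s", f)
            }
        }
    }
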
	I1227 20:28:57.610120  340625 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 20:28:57.651578  340625 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 20:28:57.651650  340625 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 20:28:57.717226  340625 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 20:28:57.717315  340625 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1227 20:28:57.717358  340625 kubeadm.go:319] OS: Linux
	I1227 20:28:57.717448  340625 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 20:28:57.717519  340625 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 20:28:57.717567  340625 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 20:28:57.717647  340625 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 20:28:57.717733  340625 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 20:28:57.717812  340625 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 20:28:57.717923  340625 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 20:28:57.717998  340625 kubeadm.go:319] CGROUPS_IO: enabled
	I1227 20:28:57.774331  340625 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 20:28:57.774452  340625 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 20:28:57.774590  340625 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 20:28:57.781865  340625 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1227 20:28:53.641780  334810 pod_ready.go:104] pod "coredns-7d764666f9-nvnjg" is not "Ready", error: <nil>
	W1227 20:28:56.141575  334810 pod_ready.go:104] pod "coredns-7d764666f9-nvnjg" is not "Ready", error: <nil>
	I1227 20:28:57.784248  340625 out.go:252]   - Generating certificates and keys ...
	I1227 20:28:57.784354  340625 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 20:28:57.784471  340625 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 20:28:57.800338  340625 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 20:28:57.829651  340625 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 20:28:57.870093  340625 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 20:28:58.023851  340625 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 20:28:58.175326  340625 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 20:28:58.175458  340625 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-307728] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1227 20:28:58.227767  340625 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 20:28:58.227948  340625 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-307728] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1227 20:28:58.327146  340625 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 20:28:58.413976  340625 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 20:28:58.519514  340625 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 20:28:58.519622  340625 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 20:28:58.602374  340625 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 20:28:58.658792  340625 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 20:28:58.828754  340625 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 20:28:58.899131  340625 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 20:28:58.981756  340625 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 20:28:58.982297  340625 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 20:28:58.986398  340625 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 20:28:58.050409  340025 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1227 20:28:58.055041  340025 api_server.go:325] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:28:58.055071  340025 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:28:58.549732  340025 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1227 20:28:58.554808  340025 api_server.go:325] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1227 20:28:58.555845  340025 api_server.go:141] control plane version: v1.35.0
	I1227 20:28:58.555884  340025 api_server.go:131] duration metric: took 1.006522468s to wait for apiserver health ...
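
A minimal Go sketch (not minikube's api_server.go) of the healthz polling shown above: keep requesting /healthz until it returns 200 "ok", tolerating the transient 500s emitted while post-start hooks such as rbac/bootstrap-roles finish. TLS verification is skipped here purely for the sketch; minikube verifies against the cluster CA.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        url := "https://192.168.85.2:8444/healthz" // endpoint taken from the log
        deadline := time.Now().Add(4 * time.Minute)

        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, string(body))
                if resp.StatusCode == http.StatusOK {
                    return // apiserver is healthy
                }
            }
            time.Sleep(500 * time.Millisecond) // retry until healthy or deadline
        }
        fmt.Println("timed out waiting for apiserver health")
    }
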
	I1227 20:28:58.555894  340025 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:28:58.607197  340025 system_pods.go:59] 8 kube-system pods found
	I1227 20:28:58.607235  340025 system_pods.go:61] "coredns-7d764666f9-gtzdb" [94553f69-88cf-4e2c-94e4-99d2034bcc9a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:28:58.607245  340025 system_pods.go:61] "etcd-default-k8s-diff-port-954154" [4eafa22e-2c0b-4d78-90f2-2becf0d0e321] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:28:58.607258  340025 system_pods.go:61] "kindnet-c9zm9" [3b0b3ae7-d30d-4a2d-bbff-21dba59ffb5a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 20:28:58.607263  340025 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-954154" [f52b07b4-22fb-4e93-bfa4-f86ed39beed6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:28:58.607273  340025 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-954154" [b43b5648-7206-444d-8fdd-7504b26bf16c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:28:58.607281  340025 system_pods.go:61] "kube-proxy-m5zcc" [2ca10db8-75c1-459b-b3ec-bdb128f9d72a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 20:28:58.607286  340025 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-954154" [b39ea939-f629-4d8d-9874-3234839d3abd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:28:58.607292  340025 system_pods.go:61] "storage-provisioner" [e47d55de-82b6-47f6-b639-1c28182777af] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:28:58.607299  340025 system_pods.go:74] duration metric: took 51.39957ms to wait for pod list to return data ...
	I1227 20:28:58.607309  340025 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:28:58.609698  340025 default_sa.go:45] found service account: "default"
	I1227 20:28:58.609718  340025 default_sa.go:55] duration metric: took 2.396384ms for default service account to be created ...
	I1227 20:28:58.609726  340025 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:28:58.612207  340025 system_pods.go:86] 8 kube-system pods found
	I1227 20:28:58.612229  340025 system_pods.go:89] "coredns-7d764666f9-gtzdb" [94553f69-88cf-4e2c-94e4-99d2034bcc9a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:28:58.612237  340025 system_pods.go:89] "etcd-default-k8s-diff-port-954154" [4eafa22e-2c0b-4d78-90f2-2becf0d0e321] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:28:58.612250  340025 system_pods.go:89] "kindnet-c9zm9" [3b0b3ae7-d30d-4a2d-bbff-21dba59ffb5a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 20:28:58.612256  340025 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-954154" [f52b07b4-22fb-4e93-bfa4-f86ed39beed6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:28:58.612266  340025 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-954154" [b43b5648-7206-444d-8fdd-7504b26bf16c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:28:58.612271  340025 system_pods.go:89] "kube-proxy-m5zcc" [2ca10db8-75c1-459b-b3ec-bdb128f9d72a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 20:28:58.612282  340025 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-954154" [b39ea939-f629-4d8d-9874-3234839d3abd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:28:58.612294  340025 system_pods.go:89] "storage-provisioner" [e47d55de-82b6-47f6-b639-1c28182777af] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:28:58.612305  340025 system_pods.go:126] duration metric: took 2.569534ms to wait for k8s-apps to be running ...
	I1227 20:28:58.612315  340025 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:28:58.612351  340025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:28:58.624877  340025 system_svc.go:56] duration metric: took 12.557367ms WaitForService to wait for kubelet
	I1227 20:28:58.624898  340025 kubeadm.go:587] duration metric: took 2.945693199s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:28:58.624959  340025 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:28:58.627235  340025 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 20:28:58.627258  340025 node_conditions.go:123] node cpu capacity is 8
	I1227 20:28:58.627273  340025 node_conditions.go:105] duration metric: took 2.308686ms to run NodePressure ...
	I1227 20:28:58.627296  340025 start.go:242] waiting for startup goroutines ...
	I1227 20:28:58.627310  340025 start.go:247] waiting for cluster config update ...
	I1227 20:28:58.627328  340025 start.go:256] writing updated cluster config ...
	I1227 20:28:58.627581  340025 ssh_runner.go:195] Run: rm -f paused
	I1227 20:28:58.631443  340025 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:28:58.634602  340025 pod_ready.go:83] waiting for pod "coredns-7d764666f9-gtzdb" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 20:29:00.640993  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	W1227 20:28:58.641325  334810 pod_ready.go:104] pod "coredns-7d764666f9-nvnjg" is not "Ready", error: <nil>
	W1227 20:29:00.641748  334810 pod_ready.go:104] pod "coredns-7d764666f9-nvnjg" is not "Ready", error: <nil>
	W1227 20:29:02.647042  334810 pod_ready.go:104] pod "coredns-7d764666f9-nvnjg" is not "Ready", error: <nil>
	I1227 20:28:58.987788  340625 out.go:252]   - Booting up control plane ...
	I1227 20:28:58.987909  340625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 20:28:58.988350  340625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 20:28:58.991232  340625 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 20:28:59.009829  340625 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 20:28:59.010080  340625 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 20:28:59.018540  340625 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 20:28:59.018939  340625 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 20:28:59.019013  340625 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 20:28:59.122102  340625 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 20:28:59.122243  340625 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 20:28:59.623869  340625 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.822281ms
	I1227 20:28:59.626698  340625 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 20:28:59.626835  340625 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1227 20:28:59.626991  340625 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 20:28:59.627081  340625 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 20:29:00.132655  340625 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 505.729406ms
	I1227 20:29:01.422270  340625 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.79552906s
	I1227 20:29:03.128834  340625 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502046336s
	I1227 20:29:03.152511  340625 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 20:29:03.162776  340625 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 20:29:03.172127  340625 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 20:29:03.172413  340625 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-307728 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 20:29:03.183365  340625 kubeadm.go:319] [bootstrap-token] Using token: m3fv2a.3hy2dotriyukxsjh
	I1227 20:29:03.184664  340625 out.go:252]   - Configuring RBAC rules ...
	I1227 20:29:03.184815  340625 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 20:29:03.188315  340625 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 20:29:03.194363  340625 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 20:29:03.196969  340625 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 20:29:03.199431  340625 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 20:29:03.201765  340625 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 20:29:03.538642  340625 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 20:29:03.963285  340625 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 20:29:04.536423  340625 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 20:29:04.536448  340625 kubeadm.go:319] 
	I1227 20:29:04.536526  340625 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 20:29:04.536532  340625 kubeadm.go:319] 
	I1227 20:29:04.536632  340625 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 20:29:04.536638  340625 kubeadm.go:319] 
	I1227 20:29:04.536668  340625 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 20:29:04.536741  340625 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 20:29:04.536814  340625 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 20:29:04.536821  340625 kubeadm.go:319] 
	I1227 20:29:04.536887  340625 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 20:29:04.536933  340625 kubeadm.go:319] 
	I1227 20:29:04.537020  340625 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 20:29:04.537030  340625 kubeadm.go:319] 
	I1227 20:29:04.537108  340625 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 20:29:04.537214  340625 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 20:29:04.537310  340625 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 20:29:04.537320  340625 kubeadm.go:319] 
	I1227 20:29:04.537447  340625 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 20:29:04.537551  340625 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 20:29:04.537565  340625 kubeadm.go:319] 
	I1227 20:29:04.537685  340625 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token m3fv2a.3hy2dotriyukxsjh \
	I1227 20:29:04.537816  340625 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da6685172cfe638aa995cfd5ba180acce81fcf1770a93ae27b4f215e6d45ef35 \
	I1227 20:29:04.537849  340625 kubeadm.go:319] 	--control-plane 
	I1227 20:29:04.537858  340625 kubeadm.go:319] 
	I1227 20:29:04.537990  340625 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 20:29:04.538002  340625 kubeadm.go:319] 
	I1227 20:29:04.538113  340625 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token m3fv2a.3hy2dotriyukxsjh \
	I1227 20:29:04.538240  340625 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:da6685172cfe638aa995cfd5ba180acce81fcf1770a93ae27b4f215e6d45ef35 
	I1227 20:29:04.541383  340625 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1227 20:29:04.541573  340625 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 20:29:04.541611  340625 cni.go:84] Creating CNI manager for ""
	I1227 20:29:04.541626  340625 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:29:04.544110  340625 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1227 20:29:02.648541  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	W1227 20:29:05.144552  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	W1227 20:29:07.640503  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	W1227 20:29:05.143954  334810 pod_ready.go:104] pod "coredns-7d764666f9-nvnjg" is not "Ready", error: <nil>
	W1227 20:29:07.641799  334810 pod_ready.go:104] pod "coredns-7d764666f9-nvnjg" is not "Ready", error: <nil>
	I1227 20:29:04.545428  340625 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 20:29:04.550767  340625 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1227 20:29:04.550785  340625 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 20:29:04.567775  340625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1227 20:29:04.818653  340625 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 20:29:04.818784  340625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:29:04.818826  340625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-307728 minikube.k8s.io/updated_at=2025_12_27T20_29_04_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562 minikube.k8s.io/name=newest-cni-307728 minikube.k8s.io/primary=true
	I1227 20:29:04.832321  340625 ops.go:34] apiserver oom_adj: -16
	I1227 20:29:04.950068  340625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:29:05.450721  340625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:29:05.950484  340625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:29:06.450842  340625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:29:06.950887  340625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:29:07.450314  340625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:29:07.950776  340625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:29:08.450277  340625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:29:08.950170  340625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:29:09.021086  340625 kubeadm.go:1114] duration metric: took 4.202374057s to wait for elevateKubeSystemPrivileges
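
The repeated `kubectl get sa default` runs above poll until the default service account exists, which is what the elevateKubeSystemPrivileges step waits on before its cluster-admin binding can take effect. A hypothetical Go helper doing the same poll, using the kubectl binary and kubeconfig paths from the log:

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.35.0/kubectl" // path from the log
        kubeconfig := "/var/lib/minikube/kubeconfig"

        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            // Exit code 0 means the "default" service account is present.
            cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig", kubeconfig)
            if err := cmd.Run(); err == nil {
                log.Println("default service account is present")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("timed out waiting for the default service account")
    }
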
	I1227 20:29:09.021121  340625 kubeadm.go:403] duration metric: took 11.534078831s to StartCluster
	I1227 20:29:09.021138  340625 settings.go:142] acquiring lock: {Name:mkf3077b12a3cef0d75d0d0642c2652aa74718f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:29:09.021196  340625 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:29:09.022988  340625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/kubeconfig: {Name:mk4ccafb996928672a8e78a62ce47d8add645009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:29:09.023208  340625 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:29:09.023229  340625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 20:29:09.023298  340625 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:29:09.023402  340625 config.go:182] Loaded profile config "newest-cni-307728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:29:09.023406  340625 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-307728"
	I1227 20:29:09.023426  340625 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-307728"
	I1227 20:29:09.023424  340625 addons.go:70] Setting default-storageclass=true in profile "newest-cni-307728"
	I1227 20:29:09.023447  340625 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-307728"
	I1227 20:29:09.023461  340625 host.go:66] Checking if "newest-cni-307728" exists ...
	I1227 20:29:09.023847  340625 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:09.024031  340625 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:09.024541  340625 out.go:179] * Verifying Kubernetes components...
	I1227 20:29:09.025708  340625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:29:09.053461  340625 addons.go:239] Setting addon default-storageclass=true in "newest-cni-307728"
	I1227 20:29:09.053526  340625 host.go:66] Checking if "newest-cni-307728" exists ...
	I1227 20:29:09.054242  340625 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:09.055626  340625 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:29:09.056962  340625 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:29:09.056980  340625 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:29:09.057033  340625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:09.084642  340625 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:29:09.084671  340625 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:29:09.084746  340625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:09.084845  340625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:09.107735  340625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:09.122433  340625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 20:29:09.170837  340625 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:29:09.190768  340625 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:29:09.209801  340625 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:29:09.285750  340625 start.go:987] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
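
The long sed pipeline above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves inside the cluster. A standalone Go sketch of the same edit against an assumed stock Corefile (the real code pipes the live ConfigMap through sed and `kubectl replace`):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        corefile := `.:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
    }`
        hosts := `    hosts {
           192.168.103.1 host.minikube.internal
           fallthrough
        }
    `
        var b strings.Builder
        for _, line := range strings.Split(corefile, "\n") {
            // Insert the hosts block just before the forward plugin, mirroring
            // the `/^        forward .../i` sed address used in the log.
            if strings.HasPrefix(strings.TrimSpace(line), "forward .") {
                b.WriteString(hosts)
            }
            b.WriteString(line + "\n")
        }
        fmt.Print(b.String())
    }
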
	I1227 20:29:09.287444  340625 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:29:09.287503  340625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:29:09.477850  340625 api_server.go:72] duration metric: took 454.609308ms to wait for apiserver process to appear ...
	I1227 20:29:09.477876  340625 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:29:09.477893  340625 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1227 20:29:09.483481  340625 api_server.go:325] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1227 20:29:09.484383  340625 api_server.go:141] control plane version: v1.35.0
	I1227 20:29:09.484408  340625 api_server.go:131] duration metric: took 6.526033ms to wait for apiserver health ...
	I1227 20:29:09.484419  340625 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:29:09.486308  340625 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1227 20:29:09.487040  340625 system_pods.go:59] 7 kube-system pods found
	I1227 20:29:09.487075  340625 system_pods.go:61] "etcd-newest-cni-307728" [47c59b02-ea05-4deb-a2d5-f33fe18e738b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:29:09.487083  340625 system_pods.go:61] "kindnet-6z4tn" [93ba591e-f91b-4d17-bc19-0df196548fdd] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 20:29:09.487092  340625 system_pods.go:61] "kube-apiserver-newest-cni-307728" [ff05d4da-e496-4611-90a2-32a9e49a76a5] Running
	I1227 20:29:09.487099  340625 system_pods.go:61] "kube-controller-manager-newest-cni-307728" [98a6898f-bd6c-4bb5-97eb-767920c25375] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:29:09.487106  340625 system_pods.go:61] "kube-proxy-9qccb" [7af7999b-ede9-4da5-8e6f-df77472e1cdd] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 20:29:09.487112  340625 system_pods.go:61] "kube-scheduler-newest-cni-307728" [cac454d9-fa90-45da-b22c-5d0e23dc78a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:29:09.487118  340625 system_pods.go:61] "storage-provisioner" [b4c1fa65-07d5-4f68-a68b-43acd8569dd1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 20:29:09.487134  340625 system_pods.go:74] duration metric: took 2.700198ms to wait for pod list to return data ...
	I1227 20:29:09.487143  340625 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:29:09.487389  340625 addons.go:530] duration metric: took 464.089625ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1227 20:29:09.489149  340625 default_sa.go:45] found service account: "default"
	I1227 20:29:09.489169  340625 default_sa.go:55] duration metric: took 2.021711ms for default service account to be created ...
	I1227 20:29:09.489210  340625 kubeadm.go:587] duration metric: took 465.974128ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 20:29:09.489252  340625 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:29:09.491423  340625 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 20:29:09.491450  340625 node_conditions.go:123] node cpu capacity is 8
	I1227 20:29:09.491467  340625 node_conditions.go:105] duration metric: took 2.208505ms to run NodePressure ...
	I1227 20:29:09.491480  340625 start.go:242] waiting for startup goroutines ...
	I1227 20:29:09.789848  340625 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-307728" context rescaled to 1 replicas
	I1227 20:29:09.789882  340625 start.go:247] waiting for cluster config update ...
	I1227 20:29:09.789893  340625 start.go:256] writing updated cluster config ...
	I1227 20:29:09.790191  340625 ssh_runner.go:195] Run: rm -f paused
	I1227 20:29:09.838247  340625 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 20:29:09.840556  340625 out.go:179] * Done! kubectl is now configured to use "newest-cni-307728" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 27 20:28:59 newest-cni-307728 crio[777]: time="2025-12-27T20:28:59.876954672Z" level=info msg="Started container" PID=1218 containerID=a40bb549f1724c6f39e08bdaac1a97e7987e86bb3ef13eaae0d2f9d01a768d48 description=kube-system/etcd-newest-cni-307728/etcd id=6d71b3ba-1ec1-48bc-b441-871c4d55b569 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cf598084fef40decad6f827880116d609c369fbc5c89eabcdab9784420121043
	Dec 27 20:29:09 newest-cni-307728 crio[777]: time="2025-12-27T20:29:09.61214558Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-9qccb/POD" id=29125536-98a5-49ac-9ee4-6ddecf9ec110 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:29:09 newest-cni-307728 crio[777]: time="2025-12-27T20:29:09.612215644Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:09 newest-cni-307728 crio[777]: time="2025-12-27T20:29:09.613604149Z" level=info msg="Running pod sandbox: kube-system/kindnet-6z4tn/POD" id=9621e2f8-bb84-4ade-a869-25fdb14f5123 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:29:09 newest-cni-307728 crio[777]: time="2025-12-27T20:29:09.613672707Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:09 newest-cni-307728 crio[777]: time="2025-12-27T20:29:09.616367437Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=29125536-98a5-49ac-9ee4-6ddecf9ec110 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:29:09 newest-cni-307728 crio[777]: time="2025-12-27T20:29:09.616681771Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=9621e2f8-bb84-4ade-a869-25fdb14f5123 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:29:09 newest-cni-307728 crio[777]: time="2025-12-27T20:29:09.617875455Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 27 20:29:09 newest-cni-307728 crio[777]: time="2025-12-27T20:29:09.618445566Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 27 20:29:09 newest-cni-307728 crio[777]: time="2025-12-27T20:29:09.618625467Z" level=info msg="Ran pod sandbox cd76a4497287604da7848bff6a8559773b594003cc5bde508a2c23a533f7e530 with infra container: kube-system/kube-proxy-9qccb/POD" id=29125536-98a5-49ac-9ee4-6ddecf9ec110 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:29:09 newest-cni-307728 crio[777]: time="2025-12-27T20:29:09.61934728Z" level=info msg="Ran pod sandbox 0a455d1d50b17db383186e3082d2d200e772538c9291af07ba27a74a60555099 with infra container: kube-system/kindnet-6z4tn/POD" id=9621e2f8-bb84-4ade-a869-25fdb14f5123 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:29:09 newest-cni-307728 crio[777]: time="2025-12-27T20:29:09.619788744Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=922d728c-fd74-4535-ba4a-4ce8a7ce90da name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:29:09 newest-cni-307728 crio[777]: time="2025-12-27T20:29:09.620316735Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=96cdc446-18b7-4518-b76c-79dd2b8cf954 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:29:09 newest-cni-307728 crio[777]: time="2025-12-27T20:29:09.620445099Z" level=info msg="Image docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 not found" id=96cdc446-18b7-4518-b76c-79dd2b8cf954 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:29:09 newest-cni-307728 crio[777]: time="2025-12-27T20:29:09.620496231Z" level=info msg="Neither image nor artfiact docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 found" id=96cdc446-18b7-4518-b76c-79dd2b8cf954 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:29:09 newest-cni-307728 crio[777]: time="2025-12-27T20:29:09.620730489Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=95129384-b605-449f-b2d3-e38069e6e467 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:29:09 newest-cni-307728 crio[777]: time="2025-12-27T20:29:09.621403438Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=c5cc3836-33e2-43fd-a5bb-2626e3dd5e8d name=/runtime.v1.ImageService/PullImage
	Dec 27 20:29:09 newest-cni-307728 crio[777]: time="2025-12-27T20:29:09.624642623Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88\""
	Dec 27 20:29:09 newest-cni-307728 crio[777]: time="2025-12-27T20:29:09.625467136Z" level=info msg="Creating container: kube-system/kube-proxy-9qccb/kube-proxy" id=81945333-e2ad-4243-86a7-34d1db739fce name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:29:09 newest-cni-307728 crio[777]: time="2025-12-27T20:29:09.625599362Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:09 newest-cni-307728 crio[777]: time="2025-12-27T20:29:09.629570255Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:09 newest-cni-307728 crio[777]: time="2025-12-27T20:29:09.629974482Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:09 newest-cni-307728 crio[777]: time="2025-12-27T20:29:09.660428326Z" level=info msg="Created container f6089a6f9d8d9c8681cd8665fe8ea04f27a0c01472b97151e1d26ff98d6df493: kube-system/kube-proxy-9qccb/kube-proxy" id=81945333-e2ad-4243-86a7-34d1db739fce name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:29:09 newest-cni-307728 crio[777]: time="2025-12-27T20:29:09.661082714Z" level=info msg="Starting container: f6089a6f9d8d9c8681cd8665fe8ea04f27a0c01472b97151e1d26ff98d6df493" id=00e8659e-3be9-40d8-845e-57995da81479 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:29:09 newest-cni-307728 crio[777]: time="2025-12-27T20:29:09.663669656Z" level=info msg="Started container" PID=1575 containerID=f6089a6f9d8d9c8681cd8665fe8ea04f27a0c01472b97151e1d26ff98d6df493 description=kube-system/kube-proxy-9qccb/kube-proxy id=00e8659e-3be9-40d8-845e-57995da81479 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cd76a4497287604da7848bff6a8559773b594003cc5bde508a2c23a533f7e530
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f6089a6f9d8d9       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8   1 second ago        Running             kube-proxy                0                   cd76a44972876       kube-proxy-9qccb                            kube-system
	09120eb1a46c1       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508   11 seconds ago      Running             kube-controller-manager   0                   3251976d0039d       kube-controller-manager-newest-cni-307728   kube-system
	a40bb549f1724       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   11 seconds ago      Running             etcd                      0                   cf598084fef40       etcd-newest-cni-307728                      kube-system
	4d5c102b7b62e       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499   11 seconds ago      Running             kube-apiserver            0                   57d54355c98e0       kube-apiserver-newest-cni-307728            kube-system
	7ffde3f067649       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc   11 seconds ago      Running             kube-scheduler            0                   1bed72d19a227       kube-scheduler-newest-cni-307728            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-307728
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-307728
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=newest-cni-307728
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_29_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:29:01 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-307728
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:29:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:29:03 +0000   Sat, 27 Dec 2025 20:29:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:29:03 +0000   Sat, 27 Dec 2025 20:29:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:29:03 +0000   Sat, 27 Dec 2025 20:29:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 27 Dec 2025 20:29:03 +0000   Sat, 27 Dec 2025 20:29:00 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-307728
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                b8493783-f7be-4c30-8a0f-ec2eeceb6491
	  Boot ID:                    e862e3ac-f8e7-431b-a536-9252fc29cb3f
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-307728                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-6z4tn                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-307728             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-307728    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-9qccb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-307728             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-307728 event: Registered Node newest-cni-307728 in Controller
	
	
	==> dmesg <==
	[  +0.091803] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026212] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.286875] kauditd_printk_skb: 47 callbacks suppressed
	[Dec27 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e cb b0 88 25 d7 08 06
	[ +12.641300] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 46 2d 1c 8d 7d 08 06
	[  +0.000396] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e cb b0 88 25 d7 08 06
	[Dec27 20:26] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 11 8b 52 63 6c 08 06
	[ +22.750477] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 b6 be 1c 32 17 08 06
	[  +0.000388] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 11 8b 52 63 6c 08 06
	[ +11.650371] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae cd 49 2a 1d c0 08 06
	[Dec27 20:27] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 53 ca 84 73 c0 08 06
	[  +0.000337] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 74 f5 52 32 9a 08 06
	[ +17.275927] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 80 6a 51 25 19 08 06
	[  +0.000341] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ae cd 49 2a 1d c0 08 06
	
	
	==> etcd [a40bb549f1724c6f39e08bdaac1a97e7987e86bb3ef13eaae0d2f9d01a768d48] <==
	{"level":"info","ts":"2025-12-27T20:28:59.914851Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T20:29:00.406370Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-27T20:29:00.406435Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-27T20:29:00.406502Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2025-12-27T20:29:00.406524Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:29:00.406544Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2025-12-27T20:29:00.407292Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-27T20:29:00.407316Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:29:00.407335Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-12-27T20:29:00.407347Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-27T20:29:00.408040Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:newest-cni-307728 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:29:00.408092Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:29:00.408179Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:29:00.408119Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:29:00.408286Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:29:00.408313Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T20:29:00.408696Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:29:00.408802Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:29:00.408843Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:29:00.408881Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T20:29:00.409086Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T20:29:00.409562Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:29:00.409667Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:29:00.412595Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-12-27T20:29:00.412602Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:29:11 up  1:11,  0 user,  load average: 3.41, 3.21, 2.28
	Linux newest-cni-307728 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [4d5c102b7b62e473721fa0d1a477acb0fd6bd20fbc7a7a66e29c47b06075f4ea] <==
	I1227 20:29:01.454271       1 default_servicecidr_controller.go:169] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1227 20:29:01.454664       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:01.455737       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 20:29:01.457791       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 20:29:01.457840       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1227 20:29:01.457940       1 controller.go:201] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1227 20:29:01.462479       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:29:01.660990       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:29:02.359871       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1227 20:29:02.364331       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1227 20:29:02.364397       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:29:02.891236       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:29:02.930345       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:29:03.063900       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 20:29:03.070771       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1227 20:29:03.071981       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 20:29:03.076236       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:29:03.383214       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:29:03.944987       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 20:29:03.961175       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 20:29:03.969504       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1227 20:29:09.037072       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:29:09.088165       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:29:09.093005       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:29:09.284548       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [09120eb1a46c1c09f7a9011a5aff6569eb4ed4c4dbad4fe8097ba83fb109441c] <==
	I1227 20:29:08.190560       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:08.190727       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:08.190030       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-307728"
	I1227 20:29:08.190783       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1227 20:29:08.190929       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:08.190948       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:08.190955       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:08.190968       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:08.190965       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:08.191037       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:08.191326       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:08.191666       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:08.192130       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:08.192175       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:08.192201       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:08.192299       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:08.192336       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:08.192374       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:08.193159       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:29:08.197820       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-307728" podCIDRs=["10.42.0.0/24"]
	I1227 20:29:08.212710       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:08.290820       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:08.290840       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:29:08.290846       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:29:08.293954       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [f6089a6f9d8d9c8681cd8665fe8ea04f27a0c01472b97151e1d26ff98d6df493] <==
	I1227 20:29:09.698121       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:29:09.765553       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:29:09.867152       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:09.867204       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1227 20:29:09.867330       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:29:09.891217       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:29:09.891298       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:29:09.898886       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:29:09.899396       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:29:09.899418       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:29:09.900810       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:29:09.900843       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:29:09.900870       1 config.go:200] "Starting service config controller"
	I1227 20:29:09.900875       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:29:09.900938       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:29:09.900945       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:29:09.901199       1 config.go:309] "Starting node config controller"
	I1227 20:29:09.901221       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:29:09.901238       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:29:10.001015       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 20:29:10.001002       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:29:10.001005       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7ffde3f067649ba741d322a4be40b8d48dc8431731120601f3b78193004026ac] <==
	E1227 20:29:01.423552       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 20:29:01.423540       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 20:29:01.423668       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 20:29:01.423738       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 20:29:01.423814       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 20:29:01.423890       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 20:29:01.424047       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 20:29:01.424620       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 20:29:01.424965       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 20:29:01.425371       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 20:29:01.425789       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 20:29:01.425828       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 20:29:01.426002       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 20:29:01.426852       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 20:29:02.321098       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 20:29:02.340152       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 20:29:02.362546       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 20:29:02.380030       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 20:29:02.439275       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 20:29:02.511838       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 20:29:02.523440       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 20:29:02.634174       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 20:29:02.635596       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 20:29:02.813440       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1227 20:29:04.817477       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:29:04 newest-cni-307728 kubelet[1295]: E1227 20:29:04.888157    1295 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-307728\" already exists" pod="kube-system/kube-controller-manager-newest-cni-307728"
	Dec 27 20:29:04 newest-cni-307728 kubelet[1295]: E1227 20:29:04.888228    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-307728" containerName="kube-controller-manager"
	Dec 27 20:29:04 newest-cni-307728 kubelet[1295]: I1227 20:29:04.907064    1295 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-307728" podStartSLOduration=1.9070444420000001 podStartE2EDuration="1.907044442s" podCreationTimestamp="2025-12-27 20:29:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:29:04.906010313 +0000 UTC m=+1.174711543" watchObservedRunningTime="2025-12-27 20:29:04.907044442 +0000 UTC m=+1.175745658"
	Dec 27 20:29:04 newest-cni-307728 kubelet[1295]: I1227 20:29:04.942259    1295 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-307728" podStartSLOduration=1.942238681 podStartE2EDuration="1.942238681s" podCreationTimestamp="2025-12-27 20:29:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:29:04.92388639 +0000 UTC m=+1.192587612" watchObservedRunningTime="2025-12-27 20:29:04.942238681 +0000 UTC m=+1.210939906"
	Dec 27 20:29:04 newest-cni-307728 kubelet[1295]: I1227 20:29:04.942556    1295 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-307728" podStartSLOduration=1.9425459790000001 podStartE2EDuration="1.942545979s" podCreationTimestamp="2025-12-27 20:29:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:29:04.941007446 +0000 UTC m=+1.209708666" watchObservedRunningTime="2025-12-27 20:29:04.942545979 +0000 UTC m=+1.211247197"
	Dec 27 20:29:04 newest-cni-307728 kubelet[1295]: I1227 20:29:04.952380    1295 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-307728" podStartSLOduration=1.952370112 podStartE2EDuration="1.952370112s" podCreationTimestamp="2025-12-27 20:29:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:29:04.951780095 +0000 UTC m=+1.220481316" watchObservedRunningTime="2025-12-27 20:29:04.952370112 +0000 UTC m=+1.221071331"
	Dec 27 20:29:05 newest-cni-307728 kubelet[1295]: E1227 20:29:05.868036    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-307728" containerName="kube-apiserver"
	Dec 27 20:29:05 newest-cni-307728 kubelet[1295]: E1227 20:29:05.868168    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-307728" containerName="kube-controller-manager"
	Dec 27 20:29:05 newest-cni-307728 kubelet[1295]: E1227 20:29:05.868281    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-307728" containerName="kube-scheduler"
	Dec 27 20:29:05 newest-cni-307728 kubelet[1295]: E1227 20:29:05.868523    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-307728" containerName="etcd"
	Dec 27 20:29:06 newest-cni-307728 kubelet[1295]: E1227 20:29:06.869763    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-307728" containerName="etcd"
	Dec 27 20:29:06 newest-cni-307728 kubelet[1295]: E1227 20:29:06.869838    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-307728" containerName="kube-apiserver"
	Dec 27 20:29:06 newest-cni-307728 kubelet[1295]: E1227 20:29:06.870009    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-307728" containerName="kube-scheduler"
	Dec 27 20:29:07 newest-cni-307728 kubelet[1295]: E1227 20:29:07.871816    1295 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-307728" containerName="etcd"
	Dec 27 20:29:08 newest-cni-307728 kubelet[1295]: I1227 20:29:08.292573    1295 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 27 20:29:08 newest-cni-307728 kubelet[1295]: I1227 20:29:08.293631    1295 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 27 20:29:09 newest-cni-307728 kubelet[1295]: I1227 20:29:09.362055    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7af7999b-ede9-4da5-8e6f-df77472e1cdd-kube-proxy\") pod \"kube-proxy-9qccb\" (UID: \"7af7999b-ede9-4da5-8e6f-df77472e1cdd\") " pod="kube-system/kube-proxy-9qccb"
	Dec 27 20:29:09 newest-cni-307728 kubelet[1295]: I1227 20:29:09.362109    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/93ba591e-f91b-4d17-bc19-0df196548fdd-cni-cfg\") pod \"kindnet-6z4tn\" (UID: \"93ba591e-f91b-4d17-bc19-0df196548fdd\") " pod="kube-system/kindnet-6z4tn"
	Dec 27 20:29:09 newest-cni-307728 kubelet[1295]: I1227 20:29:09.362203    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93ba591e-f91b-4d17-bc19-0df196548fdd-lib-modules\") pod \"kindnet-6z4tn\" (UID: \"93ba591e-f91b-4d17-bc19-0df196548fdd\") " pod="kube-system/kindnet-6z4tn"
	Dec 27 20:29:09 newest-cni-307728 kubelet[1295]: I1227 20:29:09.362260    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93ba591e-f91b-4d17-bc19-0df196548fdd-xtables-lock\") pod \"kindnet-6z4tn\" (UID: \"93ba591e-f91b-4d17-bc19-0df196548fdd\") " pod="kube-system/kindnet-6z4tn"
	Dec 27 20:29:09 newest-cni-307728 kubelet[1295]: I1227 20:29:09.362280    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7af7999b-ede9-4da5-8e6f-df77472e1cdd-xtables-lock\") pod \"kube-proxy-9qccb\" (UID: \"7af7999b-ede9-4da5-8e6f-df77472e1cdd\") " pod="kube-system/kube-proxy-9qccb"
	Dec 27 20:29:09 newest-cni-307728 kubelet[1295]: I1227 20:29:09.362301    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7af7999b-ede9-4da5-8e6f-df77472e1cdd-lib-modules\") pod \"kube-proxy-9qccb\" (UID: \"7af7999b-ede9-4da5-8e6f-df77472e1cdd\") " pod="kube-system/kube-proxy-9qccb"
	Dec 27 20:29:09 newest-cni-307728 kubelet[1295]: I1227 20:29:09.362324    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkhzx\" (UniqueName: \"kubernetes.io/projected/7af7999b-ede9-4da5-8e6f-df77472e1cdd-kube-api-access-pkhzx\") pod \"kube-proxy-9qccb\" (UID: \"7af7999b-ede9-4da5-8e6f-df77472e1cdd\") " pod="kube-system/kube-proxy-9qccb"
	Dec 27 20:29:09 newest-cni-307728 kubelet[1295]: I1227 20:29:09.362355    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx674\" (UniqueName: \"kubernetes.io/projected/93ba591e-f91b-4d17-bc19-0df196548fdd-kube-api-access-fx674\") pod \"kindnet-6z4tn\" (UID: \"93ba591e-f91b-4d17-bc19-0df196548fdd\") " pod="kube-system/kindnet-6z4tn"
	Dec 27 20:29:09 newest-cni-307728 kubelet[1295]: I1227 20:29:09.893546    1295 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-9qccb" podStartSLOduration=0.893519949 podStartE2EDuration="893.519949ms" podCreationTimestamp="2025-12-27 20:29:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:29:09.893345037 +0000 UTC m=+6.162046258" watchObservedRunningTime="2025-12-27 20:29:09.893519949 +0000 UTC m=+6.162221174"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-307728 -n newest-cni-307728
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-307728 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-v4xtw storage-provisioner
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-307728 describe pod coredns-7d764666f9-v4xtw storage-provisioner
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-307728 describe pod coredns-7d764666f9-v4xtw storage-provisioner: exit status 1 (61.051755ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-v4xtw" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-307728 describe pod coredns-7d764666f9-v4xtw storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.41s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-820583 --alsologtostderr -v=1
E1227 20:29:25.532681   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/auto-436655/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-820583 --alsologtostderr -v=1: exit status 80 (2.087668162s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-820583 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 20:29:24.859678  350281 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:29:24.859958  350281 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:29:24.859968  350281 out.go:374] Setting ErrFile to fd 2...
	I1227 20:29:24.859973  350281 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:29:24.860143  350281 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:29:24.860389  350281 out.go:368] Setting JSON to false
	I1227 20:29:24.860407  350281 mustload.go:66] Loading cluster: embed-certs-820583
	I1227 20:29:24.860713  350281 config.go:182] Loaded profile config "embed-certs-820583": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:29:24.861130  350281 cli_runner.go:164] Run: docker container inspect embed-certs-820583 --format={{.State.Status}}
	I1227 20:29:24.878343  350281 host.go:66] Checking if "embed-certs-820583" exists ...
	I1227 20:29:24.878548  350281 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:29:24.932384  350281 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-27 20:29:24.922780625 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:29:24.933056  350281 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22332/minikube-v1.37.0-1766811082-22332-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766811082-22332/minikube-v1.37.0-1766811082-22332-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766811082-22332-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:embed-certs-820583 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(boo
l=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1227 20:29:24.934600  350281 out.go:179] * Pausing node embed-certs-820583 ... 
	I1227 20:29:24.935695  350281 host.go:66] Checking if "embed-certs-820583" exists ...
	I1227 20:29:24.935985  350281 ssh_runner.go:195] Run: systemctl --version
	I1227 20:29:24.936022  350281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-820583
	I1227 20:29:24.953569  350281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/embed-certs-820583/id_rsa Username:docker}
	I1227 20:29:25.043135  350281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:29:25.054749  350281 pause.go:52] kubelet running: true
	I1227 20:29:25.054819  350281 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:29:25.209645  350281 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:29:25.209730  350281 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:29:25.271854  350281 cri.go:96] found id: "756c2ccbc582095c3fbe9d8c0bc622ac40937ce71515d86f9c0d512f8b632a77"
	I1227 20:29:25.271877  350281 cri.go:96] found id: "4b55dea85cd2565470f3247490decce7f2b26de31e75677c7df7a52888274a5b"
	I1227 20:29:25.271884  350281 cri.go:96] found id: "7b54f2b9658d2fdabee13ddc7f55fc68dce492b532bd8eaf9c2f7464327a49f2"
	I1227 20:29:25.271888  350281 cri.go:96] found id: "c63598d0016972749ece5b7864b1d64b6047dab84a3a4e96f0e215bfa0652ee1"
	I1227 20:29:25.271891  350281 cri.go:96] found id: "7e8536f5d1391b16110203289a4355f73f4506be70a40d4c701bec3c60c025b6"
	I1227 20:29:25.271894  350281 cri.go:96] found id: "e3219756543583e2bc1bc017b95edaeb3245180fdb7e19b51f610a80a7bdaf8f"
	I1227 20:29:25.271897  350281 cri.go:96] found id: "c920e23ef438912ca2e52bd7890591042bc74ed5ef30728cbbd1281035c27d3e"
	I1227 20:29:25.271900  350281 cri.go:96] found id: "7d0b7a7e858d75dabf988a9b76fa95b39147d055357b9b794f4486154f20ba5a"
	I1227 20:29:25.271903  350281 cri.go:96] found id: "383462ccad1518d79addd7cf9399f63bb733b06b0d9e1d6abe521c276e668b92"
	I1227 20:29:25.271908  350281 cri.go:96] found id: "2d629d4ced2ba4d8f39349dc81f17c47b3b4f325690155f7be3c02e87bdd3cdf"
	I1227 20:29:25.271927  350281 cri.go:96] found id: "9a9fce26c1c18179e0c6750a04cb5c5c3f21bedaad9787d097befb5daf933a74"
	I1227 20:29:25.271933  350281 cri.go:96] found id: ""
	I1227 20:29:25.271976  350281 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:29:25.283309  350281 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:29:25Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:29:25.465722  350281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:29:25.478175  350281 pause.go:52] kubelet running: false
	I1227 20:29:25.478276  350281 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:29:25.606230  350281 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:29:25.606324  350281 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:29:25.674956  350281 cri.go:96] found id: "756c2ccbc582095c3fbe9d8c0bc622ac40937ce71515d86f9c0d512f8b632a77"
	I1227 20:29:25.674980  350281 cri.go:96] found id: "4b55dea85cd2565470f3247490decce7f2b26de31e75677c7df7a52888274a5b"
	I1227 20:29:25.674984  350281 cri.go:96] found id: "7b54f2b9658d2fdabee13ddc7f55fc68dce492b532bd8eaf9c2f7464327a49f2"
	I1227 20:29:25.674987  350281 cri.go:96] found id: "c63598d0016972749ece5b7864b1d64b6047dab84a3a4e96f0e215bfa0652ee1"
	I1227 20:29:25.674990  350281 cri.go:96] found id: "7e8536f5d1391b16110203289a4355f73f4506be70a40d4c701bec3c60c025b6"
	I1227 20:29:25.674993  350281 cri.go:96] found id: "e3219756543583e2bc1bc017b95edaeb3245180fdb7e19b51f610a80a7bdaf8f"
	I1227 20:29:25.674996  350281 cri.go:96] found id: "c920e23ef438912ca2e52bd7890591042bc74ed5ef30728cbbd1281035c27d3e"
	I1227 20:29:25.674999  350281 cri.go:96] found id: "7d0b7a7e858d75dabf988a9b76fa95b39147d055357b9b794f4486154f20ba5a"
	I1227 20:29:25.675002  350281 cri.go:96] found id: "383462ccad1518d79addd7cf9399f63bb733b06b0d9e1d6abe521c276e668b92"
	I1227 20:29:25.675018  350281 cri.go:96] found id: "2d629d4ced2ba4d8f39349dc81f17c47b3b4f325690155f7be3c02e87bdd3cdf"
	I1227 20:29:25.675020  350281 cri.go:96] found id: "9a9fce26c1c18179e0c6750a04cb5c5c3f21bedaad9787d097befb5daf933a74"
	I1227 20:29:25.675023  350281 cri.go:96] found id: ""
	I1227 20:29:25.675060  350281 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:29:25.991107  350281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:29:26.003725  350281 pause.go:52] kubelet running: false
	I1227 20:29:26.003776  350281 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:29:26.132173  350281 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:29:26.132247  350281 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:29:26.197390  350281 cri.go:96] found id: "756c2ccbc582095c3fbe9d8c0bc622ac40937ce71515d86f9c0d512f8b632a77"
	I1227 20:29:26.197414  350281 cri.go:96] found id: "4b55dea85cd2565470f3247490decce7f2b26de31e75677c7df7a52888274a5b"
	I1227 20:29:26.197418  350281 cri.go:96] found id: "7b54f2b9658d2fdabee13ddc7f55fc68dce492b532bd8eaf9c2f7464327a49f2"
	I1227 20:29:26.197422  350281 cri.go:96] found id: "c63598d0016972749ece5b7864b1d64b6047dab84a3a4e96f0e215bfa0652ee1"
	I1227 20:29:26.197425  350281 cri.go:96] found id: "7e8536f5d1391b16110203289a4355f73f4506be70a40d4c701bec3c60c025b6"
	I1227 20:29:26.197428  350281 cri.go:96] found id: "e3219756543583e2bc1bc017b95edaeb3245180fdb7e19b51f610a80a7bdaf8f"
	I1227 20:29:26.197430  350281 cri.go:96] found id: "c920e23ef438912ca2e52bd7890591042bc74ed5ef30728cbbd1281035c27d3e"
	I1227 20:29:26.197434  350281 cri.go:96] found id: "7d0b7a7e858d75dabf988a9b76fa95b39147d055357b9b794f4486154f20ba5a"
	I1227 20:29:26.197436  350281 cri.go:96] found id: "383462ccad1518d79addd7cf9399f63bb733b06b0d9e1d6abe521c276e668b92"
	I1227 20:29:26.197442  350281 cri.go:96] found id: "2d629d4ced2ba4d8f39349dc81f17c47b3b4f325690155f7be3c02e87bdd3cdf"
	I1227 20:29:26.197445  350281 cri.go:96] found id: "9a9fce26c1c18179e0c6750a04cb5c5c3f21bedaad9787d097befb5daf933a74"
	I1227 20:29:26.197447  350281 cri.go:96] found id: ""
	I1227 20:29:26.197482  350281 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:29:26.652536  350281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:29:26.665069  350281 pause.go:52] kubelet running: false
	I1227 20:29:26.665124  350281 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:29:26.807083  350281 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:29:26.807159  350281 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:29:26.873834  350281 cri.go:96] found id: "756c2ccbc582095c3fbe9d8c0bc622ac40937ce71515d86f9c0d512f8b632a77"
	I1227 20:29:26.873855  350281 cri.go:96] found id: "4b55dea85cd2565470f3247490decce7f2b26de31e75677c7df7a52888274a5b"
	I1227 20:29:26.873862  350281 cri.go:96] found id: "7b54f2b9658d2fdabee13ddc7f55fc68dce492b532bd8eaf9c2f7464327a49f2"
	I1227 20:29:26.873875  350281 cri.go:96] found id: "c63598d0016972749ece5b7864b1d64b6047dab84a3a4e96f0e215bfa0652ee1"
	I1227 20:29:26.873879  350281 cri.go:96] found id: "7e8536f5d1391b16110203289a4355f73f4506be70a40d4c701bec3c60c025b6"
	I1227 20:29:26.873884  350281 cri.go:96] found id: "e3219756543583e2bc1bc017b95edaeb3245180fdb7e19b51f610a80a7bdaf8f"
	I1227 20:29:26.873888  350281 cri.go:96] found id: "c920e23ef438912ca2e52bd7890591042bc74ed5ef30728cbbd1281035c27d3e"
	I1227 20:29:26.873893  350281 cri.go:96] found id: "7d0b7a7e858d75dabf988a9b76fa95b39147d055357b9b794f4486154f20ba5a"
	I1227 20:29:26.873897  350281 cri.go:96] found id: "383462ccad1518d79addd7cf9399f63bb733b06b0d9e1d6abe521c276e668b92"
	I1227 20:29:26.873904  350281 cri.go:96] found id: "2d629d4ced2ba4d8f39349dc81f17c47b3b4f325690155f7be3c02e87bdd3cdf"
	I1227 20:29:26.873908  350281 cri.go:96] found id: "9a9fce26c1c18179e0c6750a04cb5c5c3f21bedaad9787d097befb5daf933a74"
	I1227 20:29:26.873924  350281 cri.go:96] found id: ""
	I1227 20:29:26.873971  350281 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:29:26.888173  350281 out.go:203] 
	W1227 20:29:26.889129  350281 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:29:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:29:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 20:29:26.889143  350281 out.go:285] * 
	* 
	W1227 20:29:26.890781  350281 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 20:29:26.891878  350281 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-820583 --alsologtostderr -v=1 failed: exit status 80
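The GUEST_PAUSE exit above originates in the "sudo runc list -f json" step: runc exits 1 because its state directory /run/runc does not exist on the node (plausibly because this CRI-O build drives containers through a different OCI runtime, so runc never created a state dir), and the pause path treats that as fatal before it ever receives a container list. Below is a minimal Go sketch of that style of call; it is illustrative only, not minikube's actual pause code, and the struct fields are assumptions based on runc's documented JSON output.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer models the fields of interest from `runc list -f json`.
// Field names are assumptions based on runc's JSON output, not copied from minikube.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
	Pid    int    `json:"pid"`
}

// listRuncContainers shells out the same way the log above does and decodes the result.
func listRuncContainers() ([]runcContainer, error) {
	var stdout, stderr bytes.Buffer
	cmd := exec.Command("sudo", "runc", "list", "-f", "json")
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		// This is the branch hit above: exit status 1 with
		// "open /run/runc: no such file or directory" on stderr.
		return nil, fmt.Errorf("runc list: %v: %s", err, stderr.String())
	}
	var containers []runcContainer
	if err := json.Unmarshal(stdout.Bytes(), &containers); err != nil {
		return nil, fmt.Errorf("decoding runc list output: %v", err)
	}
	return containers, nil
}

func main() {
	containers, err := listRuncContainers()
	if err != nil {
		// Mirrors where the pause flow aborts with GUEST_PAUSE in the log.
		fmt.Println("pause would abort here:", err)
		return
	}
	for _, c := range containers {
		fmt.Printf("%s\t%s\tpid=%d\n", c.ID, c.Status, c.Pid)
	}
}

Run on the node, this reproduces the same stderr, since the command fails before any JSON is produced.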
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-820583
helpers_test.go:244: (dbg) docker inspect embed-certs-820583:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fc43585f1b095a24e4d5281b7099be3121c858459b1fabcb0807e1a2619a177e",
	        "Created": "2025-12-27T20:27:28.471289119Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 335136,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:28:28.114126216Z",
	            "FinishedAt": "2025-12-27T20:28:26.999562031Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/fc43585f1b095a24e4d5281b7099be3121c858459b1fabcb0807e1a2619a177e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fc43585f1b095a24e4d5281b7099be3121c858459b1fabcb0807e1a2619a177e/hostname",
	        "HostsPath": "/var/lib/docker/containers/fc43585f1b095a24e4d5281b7099be3121c858459b1fabcb0807e1a2619a177e/hosts",
	        "LogPath": "/var/lib/docker/containers/fc43585f1b095a24e4d5281b7099be3121c858459b1fabcb0807e1a2619a177e/fc43585f1b095a24e4d5281b7099be3121c858459b1fabcb0807e1a2619a177e-json.log",
	        "Name": "/embed-certs-820583",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-820583:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-820583",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fc43585f1b095a24e4d5281b7099be3121c858459b1fabcb0807e1a2619a177e",
	                "LowerDir": "/var/lib/docker/overlay2/424ba02ad85ff8524d34e330df411d8326c522c5d62b5ecf2250fad536699b47-init/diff:/var/lib/docker/overlay2/37a0694d5e09176fe02add5b16d603e44d65e3a985a3465775c60819ea782502/diff",
	                "MergedDir": "/var/lib/docker/overlay2/424ba02ad85ff8524d34e330df411d8326c522c5d62b5ecf2250fad536699b47/merged",
	                "UpperDir": "/var/lib/docker/overlay2/424ba02ad85ff8524d34e330df411d8326c522c5d62b5ecf2250fad536699b47/diff",
	                "WorkDir": "/var/lib/docker/overlay2/424ba02ad85ff8524d34e330df411d8326c522c5d62b5ecf2250fad536699b47/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-820583",
	                "Source": "/var/lib/docker/volumes/embed-certs-820583/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-820583",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-820583",
	                "name.minikube.sigs.k8s.io": "embed-certs-820583",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "09809101a002573ccb5b214bd2bfa63423368e6d9898c911a22fd207c83a1d41",
	            "SandboxKey": "/var/run/docker/netns/09809101a002",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-820583": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "df613bfb14c3c19de8431bee4bfb1a435f82a062a92d1a7c32f9d573cfc5cc6e",
	                    "EndpointID": "f66d7a3352036d69b48ac350e0a57f138369f9d4ec2b2a00be60fc835c093225",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "c2:12:15:f2:1c:e1",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-820583",
	                        "fc43585f1b09"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
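The NetworkSettings.Ports block in the inspect output above records the host-side ports the container was published on (22/tcp -> 127.0.0.1:33118, and so on). A minimal sketch for reading the SSH mapping back out of that JSON, assuming only the field layout visible in the excerpt and not part of the test harness:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// inspectEntry mirrors just the fields used below from `docker inspect` output.
type inspectEntry struct {
	Name            string
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	// Expects the inspect JSON on stdin, e.g. (readport.go is a hypothetical file name):
	//   docker inspect embed-certs-820583 | go run readport.go
	var entries []inspectEntry
	if err := json.NewDecoder(os.Stdin).Decode(&entries); err != nil {
		fmt.Fprintln(os.Stderr, "decoding inspect output:", err)
		os.Exit(1)
	}
	for _, e := range entries {
		for _, binding := range e.NetworkSettings.Ports["22/tcp"] {
			// For the container above this prints: /embed-certs-820583 ssh -> 127.0.0.1:33118
			fmt.Printf("%s ssh -> %s:%s\n", e.Name, binding.HostIp, binding.HostPort)
		}
	}
}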
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-820583 -n embed-certs-820583
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-820583 -n embed-certs-820583: exit status 2 (324.436957ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-820583 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-820583 logs -n 25: (1.075796062s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ start   │ -p embed-certs-820583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-820583                │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:29 UTC │
	│ stop    │ -p default-k8s-diff-port-954154 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-954154      │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ image   │ old-k8s-version-762177 image list --format=json                                                                                                                                                                                               │ old-k8s-version-762177            │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ pause   │ -p old-k8s-version-762177 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-762177            │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ delete  │ -p old-k8s-version-762177                                                                                                                                                                                                                     │ old-k8s-version-762177            │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-954154 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-954154      │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ start   │ -p default-k8s-diff-port-954154 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-954154      │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ delete  │ -p old-k8s-version-762177                                                                                                                                                                                                                     │ old-k8s-version-762177            │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ start   │ -p newest-cni-307728 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:29 UTC │
	│ image   │ no-preload-014435 image list --format=json                                                                                                                                                                                                    │ no-preload-014435                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ pause   │ -p no-preload-014435 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-014435                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p no-preload-014435                                                                                                                                                                                                                          │ no-preload-014435                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ addons  │ enable metrics-server -p newest-cni-307728 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p no-preload-014435                                                                                                                                                                                                                          │ no-preload-014435                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ start   │ -p test-preload-dl-gcs-588477 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-588477        │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ stop    │ -p newest-cni-307728 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ delete  │ -p test-preload-dl-gcs-588477                                                                                                                                                                                                                 │ test-preload-dl-gcs-588477        │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ start   │ -p test-preload-dl-github-805734 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-805734     │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p test-preload-dl-github-805734                                                                                                                                                                                                              │ test-preload-dl-github-805734     │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-275955 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-275955 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-275955                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-275955 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ addons  │ enable dashboard -p newest-cni-307728 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ start   │ -p newest-cni-307728 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ image   │ embed-certs-820583 image list --format=json                                                                                                                                                                                                   │ embed-certs-820583                │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ pause   │ -p embed-certs-820583 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-820583                │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:29:22
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:29:22.784538  349640 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:29:22.784794  349640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:29:22.784803  349640 out.go:374] Setting ErrFile to fd 2...
	I1227 20:29:22.784808  349640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:29:22.785052  349640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:29:22.785520  349640 out.go:368] Setting JSON to false
	I1227 20:29:22.786562  349640 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4312,"bootTime":1766863051,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 20:29:22.786612  349640 start.go:143] virtualization: kvm guest
	I1227 20:29:22.788250  349640 out.go:179] * [newest-cni-307728] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 20:29:22.789332  349640 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:29:22.789351  349640 notify.go:221] Checking for updates...
	I1227 20:29:22.791442  349640 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:29:22.792602  349640 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:29:22.793592  349640 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube
	I1227 20:29:22.794578  349640 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 20:29:22.795545  349640 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:29:22.796871  349640 config.go:182] Loaded profile config "newest-cni-307728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:29:22.797487  349640 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:29:22.820540  349640 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 20:29:22.820686  349640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:29:22.876976  349640 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-27 20:29:22.867077037 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:29:22.877116  349640 docker.go:319] overlay module found
	I1227 20:29:22.878722  349640 out.go:179] * Using the docker driver based on existing profile
	I1227 20:29:22.879763  349640 start.go:309] selected driver: docker
	I1227 20:29:22.879776  349640 start.go:928] validating driver "docker" against &{Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:29:22.879862  349640 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:29:22.880423  349640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:29:22.933111  349640 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-27 20:29:22.923700326 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:29:22.933397  349640 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 20:29:22.933437  349640 cni.go:84] Creating CNI manager for ""
	I1227 20:29:22.933495  349640 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:29:22.933527  349640 start.go:353] cluster config:
	{Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:29:22.935838  349640 out.go:179] * Starting "newest-cni-307728" primary control-plane node in "newest-cni-307728" cluster
	I1227 20:29:22.936870  349640 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:29:22.938035  349640 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:29:22.939178  349640 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:29:22.939218  349640 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 20:29:22.939230  349640 cache.go:65] Caching tarball of preloaded images
	I1227 20:29:22.939273  349640 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:29:22.939310  349640 preload.go:251] Found /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 20:29:22.939321  349640 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:29:22.939415  349640 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/config.json ...
	I1227 20:29:22.958953  349640 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:29:22.958973  349640 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:29:22.958989  349640 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:29:22.959021  349640 start.go:360] acquireMachinesLock for newest-cni-307728: {Name:mk68119d6288f8bd1ffbe980f508592c691efe9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:29:22.959080  349640 start.go:364] duration metric: took 32.398µs to acquireMachinesLock for "newest-cni-307728"
	I1227 20:29:22.959096  349640 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:29:22.959101  349640 fix.go:54] fixHost starting: 
	I1227 20:29:22.959287  349640 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:22.976170  349640 fix.go:112] recreateIfNeeded on newest-cni-307728: state=Stopped err=<nil>
	W1227 20:29:22.976196  349640 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Dec 27 20:28:57 embed-certs-820583 crio[569]: time="2025-12-27T20:28:57.137096647Z" level=info msg="Started container" PID=1774 containerID=90c1809ba241a50e350c47b9df846d11782f0949cc835a82ca985f0d21bf4fb2 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lw2jd/dashboard-metrics-scraper id=bfa18776-bd97-4573-869b-da66eeca983a name=/runtime.v1.RuntimeService/StartContainer sandboxID=6910c9f4b950710f274f042c84116322c028121d818ee759f6327837a88c5962
	Dec 27 20:28:57 embed-certs-820583 crio[569]: time="2025-12-27T20:28:57.180907995Z" level=info msg="Removing container: b15a00182e38573708f6027521c5733bc87e7d5fe16f2b0facb8ad5d8a552fbf" id=84c93cf9-451a-4025-ad22-2bc939786d21 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:28:57 embed-certs-820583 crio[569]: time="2025-12-27T20:28:57.193506499Z" level=info msg="Removed container b15a00182e38573708f6027521c5733bc87e7d5fe16f2b0facb8ad5d8a552fbf: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lw2jd/dashboard-metrics-scraper" id=84c93cf9-451a-4025-ad22-2bc939786d21 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:29:07 embed-certs-820583 crio[569]: time="2025-12-27T20:29:07.208050525Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=15fd98cd-722e-46e7-8790-c0e6b3ef99c5 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:29:07 embed-certs-820583 crio[569]: time="2025-12-27T20:29:07.209009644Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=bab4fab5-444e-4c46-be5c-5c019227cb8f name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:29:07 embed-certs-820583 crio[569]: time="2025-12-27T20:29:07.210357845Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b7c63a0a-8c8e-431f-bd26-5da8af0d66f5 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:29:07 embed-certs-820583 crio[569]: time="2025-12-27T20:29:07.210492233Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:07 embed-certs-820583 crio[569]: time="2025-12-27T20:29:07.216118944Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:07 embed-certs-820583 crio[569]: time="2025-12-27T20:29:07.216302671Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1abaff484001dcfcde82469159e075d4bdbf64ae7d8d1db0623ae52af2c9c236/merged/etc/passwd: no such file or directory"
	Dec 27 20:29:07 embed-certs-820583 crio[569]: time="2025-12-27T20:29:07.216333898Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1abaff484001dcfcde82469159e075d4bdbf64ae7d8d1db0623ae52af2c9c236/merged/etc/group: no such file or directory"
	Dec 27 20:29:07 embed-certs-820583 crio[569]: time="2025-12-27T20:29:07.216887677Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:07 embed-certs-820583 crio[569]: time="2025-12-27T20:29:07.236726162Z" level=info msg="Created container 756c2ccbc582095c3fbe9d8c0bc622ac40937ce71515d86f9c0d512f8b632a77: kube-system/storage-provisioner/storage-provisioner" id=b7c63a0a-8c8e-431f-bd26-5da8af0d66f5 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:29:07 embed-certs-820583 crio[569]: time="2025-12-27T20:29:07.237484167Z" level=info msg="Starting container: 756c2ccbc582095c3fbe9d8c0bc622ac40937ce71515d86f9c0d512f8b632a77" id=f9f428ce-4b1a-4755-b85f-344ae11afb53 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:29:07 embed-certs-820583 crio[569]: time="2025-12-27T20:29:07.239868733Z" level=info msg="Started container" PID=1789 containerID=756c2ccbc582095c3fbe9d8c0bc622ac40937ce71515d86f9c0d512f8b632a77 description=kube-system/storage-provisioner/storage-provisioner id=f9f428ce-4b1a-4755-b85f-344ae11afb53 name=/runtime.v1.RuntimeService/StartContainer sandboxID=839c645e5467f44de4a2d575b7ce4088dc8d55bed98c43cf204ada7a51e30f73
	Dec 27 20:29:24 embed-certs-820583 crio[569]: time="2025-12-27T20:29:24.092514047Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9d5a3b18-d3df-4b41-8888-24b63b878112 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:29:24 embed-certs-820583 crio[569]: time="2025-12-27T20:29:24.093569106Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6fc3e515-9a3b-4a4b-a60d-b0ed1f5ecb95 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:29:24 embed-certs-820583 crio[569]: time="2025-12-27T20:29:24.094547684Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lw2jd/dashboard-metrics-scraper" id=5d47e4be-6d4f-4fb4-ab36-e21ef8235cd9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:29:24 embed-certs-820583 crio[569]: time="2025-12-27T20:29:24.094696279Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:24 embed-certs-820583 crio[569]: time="2025-12-27T20:29:24.101364414Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:24 embed-certs-820583 crio[569]: time="2025-12-27T20:29:24.101795508Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:24 embed-certs-820583 crio[569]: time="2025-12-27T20:29:24.130407181Z" level=info msg="Created container 2d629d4ced2ba4d8f39349dc81f17c47b3b4f325690155f7be3c02e87bdd3cdf: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lw2jd/dashboard-metrics-scraper" id=5d47e4be-6d4f-4fb4-ab36-e21ef8235cd9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:29:24 embed-certs-820583 crio[569]: time="2025-12-27T20:29:24.130952011Z" level=info msg="Starting container: 2d629d4ced2ba4d8f39349dc81f17c47b3b4f325690155f7be3c02e87bdd3cdf" id=ef4fcaab-07aa-46cf-9f70-85082974c553 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:29:24 embed-certs-820583 crio[569]: time="2025-12-27T20:29:24.13257012Z" level=info msg="Started container" PID=1829 containerID=2d629d4ced2ba4d8f39349dc81f17c47b3b4f325690155f7be3c02e87bdd3cdf description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lw2jd/dashboard-metrics-scraper id=ef4fcaab-07aa-46cf-9f70-85082974c553 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6910c9f4b950710f274f042c84116322c028121d818ee759f6327837a88c5962
	Dec 27 20:29:24 embed-certs-820583 crio[569]: time="2025-12-27T20:29:24.255570453Z" level=info msg="Removing container: 90c1809ba241a50e350c47b9df846d11782f0949cc835a82ca985f0d21bf4fb2" id=41222281-cf94-4d16-8ba6-dd456550dbab name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:29:24 embed-certs-820583 crio[569]: time="2025-12-27T20:29:24.264612994Z" level=info msg="Removed container 90c1809ba241a50e350c47b9df846d11782f0949cc835a82ca985f0d21bf4fb2: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lw2jd/dashboard-metrics-scraper" id=41222281-cf94-4d16-8ba6-dd456550dbab name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	2d629d4ced2ba       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           3 seconds ago       Exited              dashboard-metrics-scraper   3                   6910c9f4b9507       dashboard-metrics-scraper-867fb5f87b-lw2jd   kubernetes-dashboard
	756c2ccbc5820       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   839c645e5467f       storage-provisioner                          kube-system
	9a9fce26c1c18       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   60129d86029cc       kubernetes-dashboard-b84665fb8-2hqqv         kubernetes-dashboard
	db389266c3bcb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   33ef4830dce7e       busybox                                      default
	4b55dea85cd25       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           51 seconds ago      Running             coredns                     0                   ddae4493d8d4f       coredns-7d764666f9-nvnjg                     kube-system
	7b54f2b9658d2       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           51 seconds ago      Running             kube-proxy                  0                   3643b6ad585a6       kube-proxy-srwxn                             kube-system
	c63598d001697       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   839c645e5467f       storage-provisioner                          kube-system
	7e8536f5d1391       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           51 seconds ago      Running             kindnet-cni                 0                   0ecee6153100d       kindnet-6d59t                                kube-system
	e321975654358       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           53 seconds ago      Running             etcd                        0                   a9b3b47106e24       etcd-embed-certs-820583                      kube-system
	c920e23ef4389       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           53 seconds ago      Running             kube-controller-manager     0                   5f542d34d6914       kube-controller-manager-embed-certs-820583   kube-system
	7d0b7a7e858d7       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           53 seconds ago      Running             kube-apiserver              0                   da4ba350f1424       kube-apiserver-embed-certs-820583            kube-system
	383462ccad151       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           53 seconds ago      Running             kube-scheduler              0                   05d9346fd0a42       kube-scheduler-embed-certs-820583            kube-system
	
	
	==> coredns [4b55dea85cd2565470f3247490decce7f2b26de31e75677c7df7a52888274a5b] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:44089 - 28196 "HINFO IN 4160130662824686331.2662000661116471141. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.093595256s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               embed-certs-820583
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-820583
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=embed-certs-820583
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_27_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:27:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-820583
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:29:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:29:06 +0000   Sat, 27 Dec 2025 20:27:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:29:06 +0000   Sat, 27 Dec 2025 20:27:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:29:06 +0000   Sat, 27 Dec 2025 20:27:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:29:06 +0000   Sat, 27 Dec 2025 20:27:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-820583
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                41c5c9fb-06be-4108-9630-9ada526cc117
	  Boot ID:                    e862e3ac-f8e7-431b-a536-9252fc29cb3f
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-7d764666f9-nvnjg                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-embed-certs-820583                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-6d59t                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-embed-certs-820583             250m (3%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-embed-certs-820583    200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-srwxn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-embed-certs-820583             100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-lw2jd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-2hqqv          0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  104s  node-controller  Node embed-certs-820583 event: Registered Node embed-certs-820583 in Controller
	  Normal  RegisteredNode  48s   node-controller  Node embed-certs-820583 event: Registered Node embed-certs-820583 in Controller
	
	
	==> dmesg <==
	[  +0.091803] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026212] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.286875] kauditd_printk_skb: 47 callbacks suppressed
	[Dec27 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e cb b0 88 25 d7 08 06
	[ +12.641300] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 46 2d 1c 8d 7d 08 06
	[  +0.000396] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e cb b0 88 25 d7 08 06
	[Dec27 20:26] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 11 8b 52 63 6c 08 06
	[ +22.750477] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 b6 be 1c 32 17 08 06
	[  +0.000388] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 11 8b 52 63 6c 08 06
	[ +11.650371] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae cd 49 2a 1d c0 08 06
	[Dec27 20:27] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 53 ca 84 73 c0 08 06
	[  +0.000337] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 74 f5 52 32 9a 08 06
	[ +17.275927] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 80 6a 51 25 19 08 06
	[  +0.000341] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ae cd 49 2a 1d c0 08 06
	
	
	==> etcd [e3219756543583e2bc1bc017b95edaeb3245180fdb7e19b51f610a80a7bdaf8f] <==
	{"level":"info","ts":"2025-12-27T20:28:34.659862Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T20:28:34.659902Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-27T20:28:34.659983Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T20:28:34.660083Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T20:28:34.661370Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T20:28:35.151245Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T20:28:35.151291Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T20:28:35.151365Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T20:28:35.151391Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:28:35.151407Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T20:28:35.152093Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T20:28:35.152118Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:28:35.152135Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T20:28:35.152146Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T20:28:35.153124Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:embed-certs-820583 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:28:35.153145Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:28:35.153160Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:28:35.153364Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:28:35.153379Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T20:28:35.154300Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:28:35.154443Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:28:35.157884Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:28:35.158025Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T20:28:52.831596Z","caller":"traceutil/trace.go:172","msg":"trace[498205144] transaction","detail":"{read_only:false; response_revision:612; number_of_response:1; }","duration":"138.54048ms","start":"2025-12-27T20:28:52.693035Z","end":"2025-12-27T20:28:52.831575Z","steps":["trace[498205144] 'process raft request'  (duration: 138.384926ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:28:52.974932Z","caller":"traceutil/trace.go:172","msg":"trace[304725984] transaction","detail":"{read_only:false; response_revision:613; number_of_response:1; }","duration":"133.539584ms","start":"2025-12-27T20:28:52.841356Z","end":"2025-12-27T20:28:52.974895Z","steps":["trace[304725984] 'process raft request'  (duration: 133.423195ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:29:28 up  1:11,  0 user,  load average: 2.78, 3.08, 2.26
	Linux embed-certs-820583 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7e8536f5d1391b16110203289a4355f73f4506be70a40d4c701bec3c60c025b6] <==
	I1227 20:28:36.719055       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:28:36.719324       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 20:28:36.719467       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:28:36.719487       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:28:36.719508       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:28:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:28:36.825427       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:28:36.826247       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:28:36.826257       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:28:36.826456       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 20:28:37.179633       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:28:37.179668       1 metrics.go:72] Registering metrics
	I1227 20:28:37.179753       1 controller.go:711] "Syncing nftables rules"
	I1227 20:28:46.826149       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:28:46.826217       1 main.go:301] handling current node
	I1227 20:28:56.829446       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:28:56.829491       1 main.go:301] handling current node
	I1227 20:29:06.825653       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:29:06.825690       1 main.go:301] handling current node
	I1227 20:29:16.825743       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:29:16.825800       1 main.go:301] handling current node
	I1227 20:29:26.827052       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:29:26.827085       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7d0b7a7e858d75dabf988a9b76fa95b39147d055357b9b794f4486154f20ba5a] <==
	I1227 20:28:36.079867       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:36.079900       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 20:28:36.079958       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 20:28:36.079906       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 20:28:36.080369       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1227 20:28:36.080413       1 aggregator.go:187] initial CRD sync complete...
	I1227 20:28:36.080432       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 20:28:36.080438       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 20:28:36.080443       1 cache.go:39] Caches are synced for autoregister controller
	I1227 20:28:36.080411       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	E1227 20:28:36.085419       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 20:28:36.086602       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 20:28:36.088964       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:28:36.112216       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:28:36.112468       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:28:36.354700       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 20:28:36.381739       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 20:28:36.408189       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:28:36.420820       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:28:36.491749       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.10.44"}
	I1227 20:28:36.504296       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.203.144"}
	I1227 20:28:36.983224       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:28:39.654536       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:28:39.801867       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:28:39.904849       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [c920e23ef438912ca2e52bd7890591042bc74ed5ef30728cbbd1281035c27d3e] <==
	I1227 20:28:39.214524       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.214643       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.214817       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.215002       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.215161       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.215596       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.216006       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.217007       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.218497       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.218814       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.219067       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 20:28:39.219231       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="embed-certs-820583"
	I1227 20:28:39.219371       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1227 20:28:39.220884       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.220895       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.221006       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.221720       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.221732       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.221835       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.221850       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:28:39.221856       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:28:39.221720       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.221973       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.224035       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.311980       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [7b54f2b9658d2fdabee13ddc7f55fc68dce492b532bd8eaf9c2f7464327a49f2] <==
	I1227 20:28:36.507863       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:28:36.579344       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:28:36.680389       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:36.680431       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 20:28:36.680568       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:28:36.699583       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:28:36.699643       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:28:36.704593       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:28:36.704987       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:28:36.705004       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:28:36.706270       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:28:36.706299       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:28:36.706385       1 config.go:309] "Starting node config controller"
	I1227 20:28:36.706400       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:28:36.706408       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:28:36.706407       1 config.go:200] "Starting service config controller"
	I1227 20:28:36.706418       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:28:36.706451       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:28:36.706474       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:28:36.806581       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:28:36.806607       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 20:28:36.806609       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [383462ccad1518d79addd7cf9399f63bb733b06b0d9e1d6abe521c276e668b92] <==
	I1227 20:28:34.788337       1 serving.go:386] Generated self-signed cert in-memory
	W1227 20:28:35.999577       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 20:28:35.999651       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 20:28:35.999666       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 20:28:35.999676       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 20:28:36.030168       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 20:28:36.030264       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:28:36.033477       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 20:28:36.033519       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:28:36.034307       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 20:28:36.034408       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 20:28:36.135772       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:28:50 embed-certs-820583 kubelet[737]: E1227 20:28:50.159744     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-820583" containerName="kube-apiserver"
	Dec 27 20:28:50 embed-certs-820583 kubelet[737]: E1227 20:28:50.159868     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-820583" containerName="kube-scheduler"
	Dec 27 20:28:52 embed-certs-820583 kubelet[737]: E1227 20:28:52.684021     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-820583" containerName="kube-controller-manager"
	Dec 27 20:28:57 embed-certs-820583 kubelet[737]: E1227 20:28:57.089671     737 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lw2jd" containerName="dashboard-metrics-scraper"
	Dec 27 20:28:57 embed-certs-820583 kubelet[737]: I1227 20:28:57.089723     737 scope.go:122] "RemoveContainer" containerID="b15a00182e38573708f6027521c5733bc87e7d5fe16f2b0facb8ad5d8a552fbf"
	Dec 27 20:28:57 embed-certs-820583 kubelet[737]: I1227 20:28:57.179397     737 scope.go:122] "RemoveContainer" containerID="b15a00182e38573708f6027521c5733bc87e7d5fe16f2b0facb8ad5d8a552fbf"
	Dec 27 20:28:57 embed-certs-820583 kubelet[737]: E1227 20:28:57.179640     737 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lw2jd" containerName="dashboard-metrics-scraper"
	Dec 27 20:28:57 embed-certs-820583 kubelet[737]: I1227 20:28:57.179674     737 scope.go:122] "RemoveContainer" containerID="90c1809ba241a50e350c47b9df846d11782f0949cc835a82ca985f0d21bf4fb2"
	Dec 27 20:28:57 embed-certs-820583 kubelet[737]: E1227 20:28:57.179862     737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-lw2jd_kubernetes-dashboard(59d06d56-971b-4ead-ae8b-d6ad7c1db340)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lw2jd" podUID="59d06d56-971b-4ead-ae8b-d6ad7c1db340"
	Dec 27 20:28:58 embed-certs-820583 kubelet[737]: E1227 20:28:58.183309     737 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lw2jd" containerName="dashboard-metrics-scraper"
	Dec 27 20:28:58 embed-certs-820583 kubelet[737]: I1227 20:28:58.183349     737 scope.go:122] "RemoveContainer" containerID="90c1809ba241a50e350c47b9df846d11782f0949cc835a82ca985f0d21bf4fb2"
	Dec 27 20:28:58 embed-certs-820583 kubelet[737]: E1227 20:28:58.183540     737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-lw2jd_kubernetes-dashboard(59d06d56-971b-4ead-ae8b-d6ad7c1db340)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lw2jd" podUID="59d06d56-971b-4ead-ae8b-d6ad7c1db340"
	Dec 27 20:29:07 embed-certs-820583 kubelet[737]: I1227 20:29:07.207543     737 scope.go:122] "RemoveContainer" containerID="c63598d0016972749ece5b7864b1d64b6047dab84a3a4e96f0e215bfa0652ee1"
	Dec 27 20:29:11 embed-certs-820583 kubelet[737]: E1227 20:29:11.231730     737 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-nvnjg" containerName="coredns"
	Dec 27 20:29:24 embed-certs-820583 kubelet[737]: E1227 20:29:24.091255     737 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lw2jd" containerName="dashboard-metrics-scraper"
	Dec 27 20:29:24 embed-certs-820583 kubelet[737]: I1227 20:29:24.091315     737 scope.go:122] "RemoveContainer" containerID="90c1809ba241a50e350c47b9df846d11782f0949cc835a82ca985f0d21bf4fb2"
	Dec 27 20:29:24 embed-certs-820583 kubelet[737]: I1227 20:29:24.254352     737 scope.go:122] "RemoveContainer" containerID="90c1809ba241a50e350c47b9df846d11782f0949cc835a82ca985f0d21bf4fb2"
	Dec 27 20:29:24 embed-certs-820583 kubelet[737]: E1227 20:29:24.254596     737 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lw2jd" containerName="dashboard-metrics-scraper"
	Dec 27 20:29:24 embed-certs-820583 kubelet[737]: I1227 20:29:24.254623     737 scope.go:122] "RemoveContainer" containerID="2d629d4ced2ba4d8f39349dc81f17c47b3b4f325690155f7be3c02e87bdd3cdf"
	Dec 27 20:29:24 embed-certs-820583 kubelet[737]: E1227 20:29:24.254819     737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-lw2jd_kubernetes-dashboard(59d06d56-971b-4ead-ae8b-d6ad7c1db340)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lw2jd" podUID="59d06d56-971b-4ead-ae8b-d6ad7c1db340"
	Dec 27 20:29:25 embed-certs-820583 kubelet[737]: I1227 20:29:25.186524     737 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 27 20:29:25 embed-certs-820583 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 20:29:25 embed-certs-820583 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 20:29:25 embed-certs-820583 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:29:25 embed-certs-820583 systemd[1]: kubelet.service: Consumed 1.660s CPU time.
	
	
	==> kubernetes-dashboard [9a9fce26c1c18179e0c6750a04cb5c5c3f21bedaad9787d097befb5daf933a74] <==
	2025/12/27 20:28:43 Using namespace: kubernetes-dashboard
	2025/12/27 20:28:43 Using in-cluster config to connect to apiserver
	2025/12/27 20:28:43 Using secret token for csrf signing
	2025/12/27 20:28:43 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 20:28:43 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 20:28:43 Successful initial request to the apiserver, version: v1.35.0
	2025/12/27 20:28:43 Generating JWE encryption key
	2025/12/27 20:28:43 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 20:28:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 20:28:43 Initializing JWE encryption key from synchronized object
	2025/12/27 20:28:43 Creating in-cluster Sidecar client
	2025/12/27 20:28:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 20:28:43 Serving insecurely on HTTP port: 9090
	2025/12/27 20:29:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 20:28:43 Starting overwatch
	
	
	==> storage-provisioner [756c2ccbc582095c3fbe9d8c0bc622ac40937ce71515d86f9c0d512f8b632a77] <==
	I1227 20:29:07.253679       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 20:29:07.263044       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 20:29:07.263090       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 20:29:07.265554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:10.721741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:14.982720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:18.580764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:21.634998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:24.657493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:24.661509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:29:24.661670       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 20:29:24.661821       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-820583_f5665e07-d2c2-4240-ab64-b21ccab48bbe!
	I1227 20:29:24.661887       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6894b207-1c50-480d-809b-b77065e433a4", APIVersion:"v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-820583_f5665e07-d2c2-4240-ab64-b21ccab48bbe became leader
	W1227 20:29:24.664299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:24.667318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:29:24.762094       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-820583_f5665e07-d2c2-4240-ab64-b21ccab48bbe!
	W1227 20:29:26.671115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:26.676154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c63598d0016972749ece5b7864b1d64b6047dab84a3a4e96f0e215bfa0652ee1] <==
	I1227 20:28:36.466004       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 20:29:06.471569       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-820583 -n embed-certs-820583
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-820583 -n embed-certs-820583: exit status 2 (349.301364ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-820583 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-820583
helpers_test.go:244: (dbg) docker inspect embed-certs-820583:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fc43585f1b095a24e4d5281b7099be3121c858459b1fabcb0807e1a2619a177e",
	        "Created": "2025-12-27T20:27:28.471289119Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 335136,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:28:28.114126216Z",
	            "FinishedAt": "2025-12-27T20:28:26.999562031Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/fc43585f1b095a24e4d5281b7099be3121c858459b1fabcb0807e1a2619a177e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fc43585f1b095a24e4d5281b7099be3121c858459b1fabcb0807e1a2619a177e/hostname",
	        "HostsPath": "/var/lib/docker/containers/fc43585f1b095a24e4d5281b7099be3121c858459b1fabcb0807e1a2619a177e/hosts",
	        "LogPath": "/var/lib/docker/containers/fc43585f1b095a24e4d5281b7099be3121c858459b1fabcb0807e1a2619a177e/fc43585f1b095a24e4d5281b7099be3121c858459b1fabcb0807e1a2619a177e-json.log",
	        "Name": "/embed-certs-820583",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-820583:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-820583",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fc43585f1b095a24e4d5281b7099be3121c858459b1fabcb0807e1a2619a177e",
	                "LowerDir": "/var/lib/docker/overlay2/424ba02ad85ff8524d34e330df411d8326c522c5d62b5ecf2250fad536699b47-init/diff:/var/lib/docker/overlay2/37a0694d5e09176fe02add5b16d603e44d65e3a985a3465775c60819ea782502/diff",
	                "MergedDir": "/var/lib/docker/overlay2/424ba02ad85ff8524d34e330df411d8326c522c5d62b5ecf2250fad536699b47/merged",
	                "UpperDir": "/var/lib/docker/overlay2/424ba02ad85ff8524d34e330df411d8326c522c5d62b5ecf2250fad536699b47/diff",
	                "WorkDir": "/var/lib/docker/overlay2/424ba02ad85ff8524d34e330df411d8326c522c5d62b5ecf2250fad536699b47/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-820583",
	                "Source": "/var/lib/docker/volumes/embed-certs-820583/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-820583",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-820583",
	                "name.minikube.sigs.k8s.io": "embed-certs-820583",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "09809101a002573ccb5b214bd2bfa63423368e6d9898c911a22fd207c83a1d41",
	            "SandboxKey": "/var/run/docker/netns/09809101a002",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-820583": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "df613bfb14c3c19de8431bee4bfb1a435f82a062a92d1a7c32f9d573cfc5cc6e",
	                    "EndpointID": "f66d7a3352036d69b48ac350e0a57f138369f9d4ec2b2a00be60fc835c093225",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "c2:12:15:f2:1c:e1",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-820583",
	                        "fc43585f1b09"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-820583 -n embed-certs-820583
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-820583 -n embed-certs-820583: exit status 2 (349.853721ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-820583 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-820583 logs -n 25: (1.271422235s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-820583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-820583                │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:29 UTC │
	│ stop    │ -p default-k8s-diff-port-954154 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-954154      │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ image   │ old-k8s-version-762177 image list --format=json                                                                                                                                                                                               │ old-k8s-version-762177            │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ pause   │ -p old-k8s-version-762177 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-762177            │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ delete  │ -p old-k8s-version-762177                                                                                                                                                                                                                     │ old-k8s-version-762177            │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-954154 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-954154      │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ start   │ -p default-k8s-diff-port-954154 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-954154      │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ delete  │ -p old-k8s-version-762177                                                                                                                                                                                                                     │ old-k8s-version-762177            │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ start   │ -p newest-cni-307728 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:29 UTC │
	│ image   │ no-preload-014435 image list --format=json                                                                                                                                                                                                    │ no-preload-014435                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ pause   │ -p no-preload-014435 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-014435                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p no-preload-014435                                                                                                                                                                                                                          │ no-preload-014435                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ addons  │ enable metrics-server -p newest-cni-307728 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p no-preload-014435                                                                                                                                                                                                                          │ no-preload-014435                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ start   │ -p test-preload-dl-gcs-588477 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-588477        │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ stop    │ -p newest-cni-307728 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ delete  │ -p test-preload-dl-gcs-588477                                                                                                                                                                                                                 │ test-preload-dl-gcs-588477        │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ start   │ -p test-preload-dl-github-805734 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-805734     │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p test-preload-dl-github-805734                                                                                                                                                                                                              │ test-preload-dl-github-805734     │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-275955 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-275955 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-275955                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-275955 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ addons  │ enable dashboard -p newest-cni-307728 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ start   │ -p newest-cni-307728 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ image   │ embed-certs-820583 image list --format=json                                                                                                                                                                                                   │ embed-certs-820583                │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ pause   │ -p embed-certs-820583 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-820583                │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:29:22
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:29:22.784538  349640 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:29:22.784794  349640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:29:22.784803  349640 out.go:374] Setting ErrFile to fd 2...
	I1227 20:29:22.784808  349640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:29:22.785052  349640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:29:22.785520  349640 out.go:368] Setting JSON to false
	I1227 20:29:22.786562  349640 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4312,"bootTime":1766863051,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 20:29:22.786612  349640 start.go:143] virtualization: kvm guest
	I1227 20:29:22.788250  349640 out.go:179] * [newest-cni-307728] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 20:29:22.789332  349640 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:29:22.789351  349640 notify.go:221] Checking for updates...
	I1227 20:29:22.791442  349640 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:29:22.792602  349640 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:29:22.793592  349640 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube
	I1227 20:29:22.794578  349640 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 20:29:22.795545  349640 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:29:22.796871  349640 config.go:182] Loaded profile config "newest-cni-307728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:29:22.797487  349640 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:29:22.820540  349640 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 20:29:22.820686  349640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:29:22.876976  349640 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-27 20:29:22.867077037 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:29:22.877116  349640 docker.go:319] overlay module found
	I1227 20:29:22.878722  349640 out.go:179] * Using the docker driver based on existing profile
	I1227 20:29:22.879763  349640 start.go:309] selected driver: docker
	I1227 20:29:22.879776  349640 start.go:928] validating driver "docker" against &{Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:29:22.879862  349640 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:29:22.880423  349640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:29:22.933111  349640 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-27 20:29:22.923700326 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:29:22.933397  349640 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 20:29:22.933437  349640 cni.go:84] Creating CNI manager for ""
	I1227 20:29:22.933495  349640 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:29:22.933527  349640 start.go:353] cluster config:
	{Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:29:22.935838  349640 out.go:179] * Starting "newest-cni-307728" primary control-plane node in "newest-cni-307728" cluster
	I1227 20:29:22.936870  349640 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:29:22.938035  349640 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:29:22.939178  349640 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:29:22.939218  349640 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 20:29:22.939230  349640 cache.go:65] Caching tarball of preloaded images
	I1227 20:29:22.939273  349640 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:29:22.939310  349640 preload.go:251] Found /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 20:29:22.939321  349640 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:29:22.939415  349640 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/config.json ...
	I1227 20:29:22.958953  349640 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:29:22.958973  349640 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:29:22.958989  349640 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:29:22.959021  349640 start.go:360] acquireMachinesLock for newest-cni-307728: {Name:mk68119d6288f8bd1ffbe980f508592c691efe9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:29:22.959080  349640 start.go:364] duration metric: took 32.398µs to acquireMachinesLock for "newest-cni-307728"
	I1227 20:29:22.959096  349640 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:29:22.959101  349640 fix.go:54] fixHost starting: 
	I1227 20:29:22.959287  349640 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:22.976170  349640 fix.go:112] recreateIfNeeded on newest-cni-307728: state=Stopped err=<nil>
	W1227 20:29:22.976196  349640 fix.go:138] unexpected machine state, will restart: <nil>
	W1227 20:29:23.141106  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	W1227 20:29:25.640346  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	W1227 20:29:27.641661  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	I1227 20:29:22.977899  349640 out.go:252] * Restarting existing docker container for "newest-cni-307728" ...
	I1227 20:29:22.977965  349640 cli_runner.go:164] Run: docker start newest-cni-307728
	I1227 20:29:23.209602  349640 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:23.228965  349640 kic.go:430] container "newest-cni-307728" state is running.
	I1227 20:29:23.229357  349640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-307728
	I1227 20:29:23.247657  349640 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/config.json ...
	I1227 20:29:23.247952  349640 machine.go:94] provisionDockerMachine start ...
	I1227 20:29:23.248040  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:23.266559  349640 main.go:144] libmachine: Using SSH client type: native
	I1227 20:29:23.266854  349640 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1227 20:29:23.266871  349640 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:29:23.267586  349640 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40356->127.0.0.1:33133: read: connection reset by peer
	I1227 20:29:26.389693  349640 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-307728
	
	I1227 20:29:26.389724  349640 ubuntu.go:182] provisioning hostname "newest-cni-307728"
	I1227 20:29:26.389772  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:26.407725  349640 main.go:144] libmachine: Using SSH client type: native
	I1227 20:29:26.407964  349640 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1227 20:29:26.407977  349640 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-307728 && echo "newest-cni-307728" | sudo tee /etc/hostname
	I1227 20:29:26.537069  349640 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-307728
	
	I1227 20:29:26.537154  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:26.554605  349640 main.go:144] libmachine: Using SSH client type: native
	I1227 20:29:26.554823  349640 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1227 20:29:26.554839  349640 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-307728' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-307728/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-307728' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:29:26.675284  349640 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:29:26.675315  349640 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-10897/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-10897/.minikube}
	I1227 20:29:26.675364  349640 ubuntu.go:190] setting up certificates
	I1227 20:29:26.675387  349640 provision.go:84] configureAuth start
	I1227 20:29:26.675446  349640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-307728
	I1227 20:29:26.693637  349640 provision.go:143] copyHostCerts
	I1227 20:29:26.693688  349640 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem, removing ...
	I1227 20:29:26.693704  349640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem
	I1227 20:29:26.693768  349640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem (1123 bytes)
	I1227 20:29:26.693867  349640 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem, removing ...
	I1227 20:29:26.693885  349640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem
	I1227 20:29:26.693934  349640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem (1675 bytes)
	I1227 20:29:26.694025  349640 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem, removing ...
	I1227 20:29:26.694034  349640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem
	I1227 20:29:26.694061  349640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem (1078 bytes)
	I1227 20:29:26.694130  349640 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem org=jenkins.newest-cni-307728 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-307728]
	I1227 20:29:26.867266  349640 provision.go:177] copyRemoteCerts
	I1227 20:29:26.867338  349640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:29:26.867388  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:26.885478  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:26.980147  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:29:26.999076  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 20:29:27.017075  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 20:29:27.035083  349640 provision.go:87] duration metric: took 359.672918ms to configureAuth
	I1227 20:29:27.035111  349640 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:29:27.035327  349640 config.go:182] Loaded profile config "newest-cni-307728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:29:27.035447  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:27.052793  349640 main.go:144] libmachine: Using SSH client type: native
	I1227 20:29:27.053075  349640 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1227 20:29:27.053104  349640 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:29:27.343702  349640 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:29:27.343727  349640 machine.go:97] duration metric: took 4.095755604s to provisionDockerMachine
	I1227 20:29:27.343741  349640 start.go:293] postStartSetup for "newest-cni-307728" (driver="docker")
	I1227 20:29:27.343754  349640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:29:27.343815  349640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:29:27.343863  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:27.367256  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:27.461046  349640 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:29:27.464376  349640 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:29:27.464409  349640 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:29:27.464430  349640 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/addons for local assets ...
	I1227 20:29:27.464483  349640 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/files for local assets ...
	I1227 20:29:27.464567  349640 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem -> 144272.pem in /etc/ssl/certs
	I1227 20:29:27.464649  349640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:29:27.471953  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:29:27.488345  349640 start.go:296] duration metric: took 144.591413ms for postStartSetup
	I1227 20:29:27.488403  349640 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:29:27.488434  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:27.506383  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:27.597986  349640 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:29:27.602558  349640 fix.go:56] duration metric: took 4.64345174s for fixHost
	I1227 20:29:27.602585  349640 start.go:83] releasing machines lock for "newest-cni-307728", held for 4.643494258s
	I1227 20:29:27.602644  349640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-307728
	I1227 20:29:27.623164  349640 ssh_runner.go:195] Run: cat /version.json
	I1227 20:29:27.623225  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:27.623311  349640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:29:27.623401  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:27.644318  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:27.644706  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:27.735874  349640 ssh_runner.go:195] Run: systemctl --version
	I1227 20:29:27.796779  349640 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:29:27.836209  349640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:29:27.841396  349640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:29:27.841458  349640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:29:27.849842  349640 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:29:27.849864  349640 start.go:496] detecting cgroup driver to use...
	I1227 20:29:27.849891  349640 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 20:29:27.850059  349640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:29:27.863872  349640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:29:27.876702  349640 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:29:27.876753  349640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:29:27.890649  349640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:29:27.903058  349640 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:29:27.992790  349640 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:29:28.078394  349640 docker.go:234] disabling docker service ...
	I1227 20:29:28.078471  349640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:29:28.093111  349640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:29:28.105866  349640 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:29:28.195542  349640 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:29:28.278015  349640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:29:28.291348  349640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:29:28.305334  349640 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:29:28.305405  349640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.314550  349640 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 20:29:28.314619  349640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.324597  349640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.334691  349640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.346435  349640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:29:28.356445  349640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.366534  349640 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.375089  349640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.384484  349640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:29:28.392136  349640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:29:28.399804  349640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:29:28.488345  349640 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:29:28.627177  349640 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:29:28.627250  349640 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:29:28.631981  349640 start.go:574] Will wait 60s for crictl version
	I1227 20:29:28.632034  349640 ssh_runner.go:195] Run: which crictl
	I1227 20:29:28.635757  349640 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:29:28.661999  349640 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:29:28.662074  349640 ssh_runner.go:195] Run: crio --version
	I1227 20:29:28.692995  349640 ssh_runner.go:195] Run: crio --version
	I1227 20:29:28.727086  349640 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:29:28.728112  349640 cli_runner.go:164] Run: docker network inspect newest-cni-307728 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:29:28.747478  349640 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1227 20:29:28.752558  349640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:29:28.764745  349640 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
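
Note on the CRI-O reconfiguration shown above: before restarting the runtime, the start log rewrites the CRI-O drop-in config in place: it points crictl at the CRI-O socket, pins the pause image to registry.k8s.io/pause:3.10.1, and switches cgroup_manager to "systemd" to match the detected host cgroup driver. A minimal standalone sketch of those same steps, assuming the drop-in path /etc/crio/crio.conf.d/02-crio.conf used in the log:

	# point crictl at the CRI-O socket (as written to /etc/crictl.yaml above)
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pin the pause image and cgroup driver that kubeadm/kubelet expect
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio
	sudo crictl version   # the log waits up to 60s for the socket and for this call to succeed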
	
	
	==> CRI-O <==
	Dec 27 20:28:57 embed-certs-820583 crio[569]: time="2025-12-27T20:28:57.137096647Z" level=info msg="Started container" PID=1774 containerID=90c1809ba241a50e350c47b9df846d11782f0949cc835a82ca985f0d21bf4fb2 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lw2jd/dashboard-metrics-scraper id=bfa18776-bd97-4573-869b-da66eeca983a name=/runtime.v1.RuntimeService/StartContainer sandboxID=6910c9f4b950710f274f042c84116322c028121d818ee759f6327837a88c5962
	Dec 27 20:28:57 embed-certs-820583 crio[569]: time="2025-12-27T20:28:57.180907995Z" level=info msg="Removing container: b15a00182e38573708f6027521c5733bc87e7d5fe16f2b0facb8ad5d8a552fbf" id=84c93cf9-451a-4025-ad22-2bc939786d21 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:28:57 embed-certs-820583 crio[569]: time="2025-12-27T20:28:57.193506499Z" level=info msg="Removed container b15a00182e38573708f6027521c5733bc87e7d5fe16f2b0facb8ad5d8a552fbf: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lw2jd/dashboard-metrics-scraper" id=84c93cf9-451a-4025-ad22-2bc939786d21 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:29:07 embed-certs-820583 crio[569]: time="2025-12-27T20:29:07.208050525Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=15fd98cd-722e-46e7-8790-c0e6b3ef99c5 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:29:07 embed-certs-820583 crio[569]: time="2025-12-27T20:29:07.209009644Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=bab4fab5-444e-4c46-be5c-5c019227cb8f name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:29:07 embed-certs-820583 crio[569]: time="2025-12-27T20:29:07.210357845Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b7c63a0a-8c8e-431f-bd26-5da8af0d66f5 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:29:07 embed-certs-820583 crio[569]: time="2025-12-27T20:29:07.210492233Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:07 embed-certs-820583 crio[569]: time="2025-12-27T20:29:07.216118944Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:07 embed-certs-820583 crio[569]: time="2025-12-27T20:29:07.216302671Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1abaff484001dcfcde82469159e075d4bdbf64ae7d8d1db0623ae52af2c9c236/merged/etc/passwd: no such file or directory"
	Dec 27 20:29:07 embed-certs-820583 crio[569]: time="2025-12-27T20:29:07.216333898Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1abaff484001dcfcde82469159e075d4bdbf64ae7d8d1db0623ae52af2c9c236/merged/etc/group: no such file or directory"
	Dec 27 20:29:07 embed-certs-820583 crio[569]: time="2025-12-27T20:29:07.216887677Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:07 embed-certs-820583 crio[569]: time="2025-12-27T20:29:07.236726162Z" level=info msg="Created container 756c2ccbc582095c3fbe9d8c0bc622ac40937ce71515d86f9c0d512f8b632a77: kube-system/storage-provisioner/storage-provisioner" id=b7c63a0a-8c8e-431f-bd26-5da8af0d66f5 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:29:07 embed-certs-820583 crio[569]: time="2025-12-27T20:29:07.237484167Z" level=info msg="Starting container: 756c2ccbc582095c3fbe9d8c0bc622ac40937ce71515d86f9c0d512f8b632a77" id=f9f428ce-4b1a-4755-b85f-344ae11afb53 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:29:07 embed-certs-820583 crio[569]: time="2025-12-27T20:29:07.239868733Z" level=info msg="Started container" PID=1789 containerID=756c2ccbc582095c3fbe9d8c0bc622ac40937ce71515d86f9c0d512f8b632a77 description=kube-system/storage-provisioner/storage-provisioner id=f9f428ce-4b1a-4755-b85f-344ae11afb53 name=/runtime.v1.RuntimeService/StartContainer sandboxID=839c645e5467f44de4a2d575b7ce4088dc8d55bed98c43cf204ada7a51e30f73
	Dec 27 20:29:24 embed-certs-820583 crio[569]: time="2025-12-27T20:29:24.092514047Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9d5a3b18-d3df-4b41-8888-24b63b878112 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:29:24 embed-certs-820583 crio[569]: time="2025-12-27T20:29:24.093569106Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6fc3e515-9a3b-4a4b-a60d-b0ed1f5ecb95 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:29:24 embed-certs-820583 crio[569]: time="2025-12-27T20:29:24.094547684Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lw2jd/dashboard-metrics-scraper" id=5d47e4be-6d4f-4fb4-ab36-e21ef8235cd9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:29:24 embed-certs-820583 crio[569]: time="2025-12-27T20:29:24.094696279Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:24 embed-certs-820583 crio[569]: time="2025-12-27T20:29:24.101364414Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:24 embed-certs-820583 crio[569]: time="2025-12-27T20:29:24.101795508Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:24 embed-certs-820583 crio[569]: time="2025-12-27T20:29:24.130407181Z" level=info msg="Created container 2d629d4ced2ba4d8f39349dc81f17c47b3b4f325690155f7be3c02e87bdd3cdf: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lw2jd/dashboard-metrics-scraper" id=5d47e4be-6d4f-4fb4-ab36-e21ef8235cd9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:29:24 embed-certs-820583 crio[569]: time="2025-12-27T20:29:24.130952011Z" level=info msg="Starting container: 2d629d4ced2ba4d8f39349dc81f17c47b3b4f325690155f7be3c02e87bdd3cdf" id=ef4fcaab-07aa-46cf-9f70-85082974c553 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:29:24 embed-certs-820583 crio[569]: time="2025-12-27T20:29:24.13257012Z" level=info msg="Started container" PID=1829 containerID=2d629d4ced2ba4d8f39349dc81f17c47b3b4f325690155f7be3c02e87bdd3cdf description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lw2jd/dashboard-metrics-scraper id=ef4fcaab-07aa-46cf-9f70-85082974c553 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6910c9f4b950710f274f042c84116322c028121d818ee759f6327837a88c5962
	Dec 27 20:29:24 embed-certs-820583 crio[569]: time="2025-12-27T20:29:24.255570453Z" level=info msg="Removing container: 90c1809ba241a50e350c47b9df846d11782f0949cc835a82ca985f0d21bf4fb2" id=41222281-cf94-4d16-8ba6-dd456550dbab name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:29:24 embed-certs-820583 crio[569]: time="2025-12-27T20:29:24.264612994Z" level=info msg="Removed container 90c1809ba241a50e350c47b9df846d11782f0949cc835a82ca985f0d21bf4fb2: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lw2jd/dashboard-metrics-scraper" id=41222281-cf94-4d16-8ba6-dd456550dbab name=/runtime.v1.RuntimeService/RemoveContainer
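
The loop visible above (a dashboard-metrics-scraper container created and started, with the previous attempt removed, at 20:28:57 and again at 20:29:24) matches the Exited state and attempt count 3 shown for that pod in the container status table below. A quick way to confirm this from the node, assuming crictl is configured against /var/run/crio/crio.sock as in the restart log:

	sudo crictl ps -a --name dashboard-metrics-scraper   # lists the repeated Exited attempts
	sudo crictl logs 2d629d4ced2ba                       # container ID taken from the status table below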
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	2d629d4ced2ba       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           5 seconds ago       Exited              dashboard-metrics-scraper   3                   6910c9f4b9507       dashboard-metrics-scraper-867fb5f87b-lw2jd   kubernetes-dashboard
	756c2ccbc5820       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   839c645e5467f       storage-provisioner                          kube-system
	9a9fce26c1c18       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   60129d86029cc       kubernetes-dashboard-b84665fb8-2hqqv         kubernetes-dashboard
	db389266c3bcb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   33ef4830dce7e       busybox                                      default
	4b55dea85cd25       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           53 seconds ago      Running             coredns                     0                   ddae4493d8d4f       coredns-7d764666f9-nvnjg                     kube-system
	7b54f2b9658d2       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           53 seconds ago      Running             kube-proxy                  0                   3643b6ad585a6       kube-proxy-srwxn                             kube-system
	c63598d001697       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   839c645e5467f       storage-provisioner                          kube-system
	7e8536f5d1391       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           53 seconds ago      Running             kindnet-cni                 0                   0ecee6153100d       kindnet-6d59t                                kube-system
	e321975654358       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           55 seconds ago      Running             etcd                        0                   a9b3b47106e24       etcd-embed-certs-820583                      kube-system
	c920e23ef4389       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           55 seconds ago      Running             kube-controller-manager     0                   5f542d34d6914       kube-controller-manager-embed-certs-820583   kube-system
	7d0b7a7e858d7       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           55 seconds ago      Running             kube-apiserver              0                   da4ba350f1424       kube-apiserver-embed-certs-820583            kube-system
	383462ccad151       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           55 seconds ago      Running             kube-scheduler              0                   05d9346fd0a42       kube-scheduler-embed-certs-820583            kube-system
	
	
	==> coredns [4b55dea85cd2565470f3247490decce7f2b26de31e75677c7df7a52888274a5b] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:44089 - 28196 "HINFO IN 4160130662824686331.2662000661116471141. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.093595256s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
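
Most of the CoreDNS output above is restart-window noise: the kubernetes plugin waits for the API before serving, the ready plugin reports "Plugins not ready" until that sync completes, and the trailing "Failed to watch" errors are consistent with the API connection dropping while the control plane restarted. To check that the pod settled afterwards, assuming the standard kubeadm label k8s-app=kube-dns on this deployment:

	kubectl -n kube-system get pods -l k8s-app=kube-dns
	kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20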
	
	
	==> describe nodes <==
	Name:               embed-certs-820583
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-820583
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=embed-certs-820583
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_27_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:27:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-820583
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:29:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:29:06 +0000   Sat, 27 Dec 2025 20:27:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:29:06 +0000   Sat, 27 Dec 2025 20:27:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:29:06 +0000   Sat, 27 Dec 2025 20:27:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:29:06 +0000   Sat, 27 Dec 2025 20:27:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-820583
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                41c5c9fb-06be-4108-9630-9ada526cc117
	  Boot ID:                    e862e3ac-f8e7-431b-a536-9252fc29cb3f
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-7d764666f9-nvnjg                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-embed-certs-820583                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-6d59t                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-embed-certs-820583             250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-embed-certs-820583    200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-srwxn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-embed-certs-820583             100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-lw2jd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-2hqqv          0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  106s  node-controller  Node embed-certs-820583 event: Registered Node embed-certs-820583 in Controller
	  Normal  RegisteredNode  50s   node-controller  Node embed-certs-820583 event: Registered Node embed-certs-820583 in Controller
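
The node summary above corresponds to kubectl describe node for embed-certs-820583. The Allocated resources block is the sum of the listed pod requests and limits, e.g. 850m CPU requested out of 8 allocatable cores is roughly 10%, matching the percentage shown. To reproduce the same view against this profile, assuming kubectl is pointed at its kubeconfig:

	kubectl describe node embed-certs-820583
	kubectl get pods -A -o wide --field-selector spec.nodeName=embed-certs-820583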
	
	
	==> dmesg <==
	[  +0.091803] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026212] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.286875] kauditd_printk_skb: 47 callbacks suppressed
	[Dec27 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e cb b0 88 25 d7 08 06
	[ +12.641300] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 46 2d 1c 8d 7d 08 06
	[  +0.000396] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e cb b0 88 25 d7 08 06
	[Dec27 20:26] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 11 8b 52 63 6c 08 06
	[ +22.750477] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 b6 be 1c 32 17 08 06
	[  +0.000388] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 11 8b 52 63 6c 08 06
	[ +11.650371] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae cd 49 2a 1d c0 08 06
	[Dec27 20:27] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 53 ca 84 73 c0 08 06
	[  +0.000337] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 74 f5 52 32 9a 08 06
	[ +17.275927] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 80 6a 51 25 19 08 06
	[  +0.000341] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ae cd 49 2a 1d c0 08 06
	
	
	==> etcd [e3219756543583e2bc1bc017b95edaeb3245180fdb7e19b51f610a80a7bdaf8f] <==
	{"level":"info","ts":"2025-12-27T20:28:34.659862Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T20:28:34.659902Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-27T20:28:34.659983Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T20:28:34.660083Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T20:28:34.661370Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T20:28:35.151245Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T20:28:35.151291Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T20:28:35.151365Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T20:28:35.151391Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:28:35.151407Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T20:28:35.152093Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T20:28:35.152118Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:28:35.152135Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T20:28:35.152146Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T20:28:35.153124Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:embed-certs-820583 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:28:35.153145Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:28:35.153160Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:28:35.153364Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:28:35.153379Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T20:28:35.154300Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:28:35.154443Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:28:35.157884Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:28:35.158025Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T20:28:52.831596Z","caller":"traceutil/trace.go:172","msg":"trace[498205144] transaction","detail":"{read_only:false; response_revision:612; number_of_response:1; }","duration":"138.54048ms","start":"2025-12-27T20:28:52.693035Z","end":"2025-12-27T20:28:52.831575Z","steps":["trace[498205144] 'process raft request'  (duration: 138.384926ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:28:52.974932Z","caller":"traceutil/trace.go:172","msg":"trace[304725984] transaction","detail":"{read_only:false; response_revision:613; number_of_response:1; }","duration":"133.539584ms","start":"2025-12-27T20:28:52.841356Z","end":"2025-12-27T20:28:52.974895Z","steps":["trace[304725984] 'process raft request'  (duration: 133.423195ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:29:30 up  1:11,  0 user,  load average: 2.78, 3.08, 2.26
	Linux embed-certs-820583 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7e8536f5d1391b16110203289a4355f73f4506be70a40d4c701bec3c60c025b6] <==
	I1227 20:28:36.719055       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:28:36.719324       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 20:28:36.719467       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:28:36.719487       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:28:36.719508       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:28:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:28:36.825427       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:28:36.826247       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:28:36.826257       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:28:36.826456       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 20:28:37.179633       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:28:37.179668       1 metrics.go:72] Registering metrics
	I1227 20:28:37.179753       1 controller.go:711] "Syncing nftables rules"
	I1227 20:28:46.826149       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:28:46.826217       1 main.go:301] handling current node
	I1227 20:28:56.829446       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:28:56.829491       1 main.go:301] handling current node
	I1227 20:29:06.825653       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:29:06.825690       1 main.go:301] handling current node
	I1227 20:29:16.825743       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:29:16.825800       1 main.go:301] handling current node
	I1227 20:29:26.827052       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:29:26.827085       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7d0b7a7e858d75dabf988a9b76fa95b39147d055357b9b794f4486154f20ba5a] <==
	I1227 20:28:36.079867       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:36.079900       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 20:28:36.079958       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 20:28:36.079906       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 20:28:36.080369       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1227 20:28:36.080413       1 aggregator.go:187] initial CRD sync complete...
	I1227 20:28:36.080432       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 20:28:36.080438       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 20:28:36.080443       1 cache.go:39] Caches are synced for autoregister controller
	I1227 20:28:36.080411       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	E1227 20:28:36.085419       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 20:28:36.086602       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 20:28:36.088964       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:28:36.112216       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:28:36.112468       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:28:36.354700       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 20:28:36.381739       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 20:28:36.408189       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:28:36.420820       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:28:36.491749       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.10.44"}
	I1227 20:28:36.504296       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.203.144"}
	I1227 20:28:36.983224       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:28:39.654536       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:28:39.801867       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:28:39.904849       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [c920e23ef438912ca2e52bd7890591042bc74ed5ef30728cbbd1281035c27d3e] <==
	I1227 20:28:39.214524       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.214643       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.214817       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.215002       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.215161       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.215596       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.216006       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.217007       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.218497       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.218814       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.219067       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 20:28:39.219231       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="embed-certs-820583"
	I1227 20:28:39.219371       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1227 20:28:39.220884       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.220895       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.221006       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.221720       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.221732       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.221835       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.221850       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:28:39.221856       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:28:39.221720       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.221973       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.224035       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:39.311980       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [7b54f2b9658d2fdabee13ddc7f55fc68dce492b532bd8eaf9c2f7464327a49f2] <==
	I1227 20:28:36.507863       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:28:36.579344       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:28:36.680389       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:36.680431       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 20:28:36.680568       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:28:36.699583       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:28:36.699643       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:28:36.704593       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:28:36.704987       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:28:36.705004       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:28:36.706270       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:28:36.706299       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:28:36.706385       1 config.go:309] "Starting node config controller"
	I1227 20:28:36.706400       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:28:36.706408       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:28:36.706407       1 config.go:200] "Starting service config controller"
	I1227 20:28:36.706418       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:28:36.706451       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:28:36.706474       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:28:36.806581       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:28:36.806607       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 20:28:36.806609       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [383462ccad1518d79addd7cf9399f63bb733b06b0d9e1d6abe521c276e668b92] <==
	I1227 20:28:34.788337       1 serving.go:386] Generated self-signed cert in-memory
	W1227 20:28:35.999577       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 20:28:35.999651       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 20:28:35.999666       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 20:28:35.999676       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 20:28:36.030168       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 20:28:36.030264       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:28:36.033477       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 20:28:36.033519       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:28:36.034307       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 20:28:36.034408       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 20:28:36.135772       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:28:50 embed-certs-820583 kubelet[737]: E1227 20:28:50.159744     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-820583" containerName="kube-apiserver"
	Dec 27 20:28:50 embed-certs-820583 kubelet[737]: E1227 20:28:50.159868     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-820583" containerName="kube-scheduler"
	Dec 27 20:28:52 embed-certs-820583 kubelet[737]: E1227 20:28:52.684021     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-820583" containerName="kube-controller-manager"
	Dec 27 20:28:57 embed-certs-820583 kubelet[737]: E1227 20:28:57.089671     737 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lw2jd" containerName="dashboard-metrics-scraper"
	Dec 27 20:28:57 embed-certs-820583 kubelet[737]: I1227 20:28:57.089723     737 scope.go:122] "RemoveContainer" containerID="b15a00182e38573708f6027521c5733bc87e7d5fe16f2b0facb8ad5d8a552fbf"
	Dec 27 20:28:57 embed-certs-820583 kubelet[737]: I1227 20:28:57.179397     737 scope.go:122] "RemoveContainer" containerID="b15a00182e38573708f6027521c5733bc87e7d5fe16f2b0facb8ad5d8a552fbf"
	Dec 27 20:28:57 embed-certs-820583 kubelet[737]: E1227 20:28:57.179640     737 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lw2jd" containerName="dashboard-metrics-scraper"
	Dec 27 20:28:57 embed-certs-820583 kubelet[737]: I1227 20:28:57.179674     737 scope.go:122] "RemoveContainer" containerID="90c1809ba241a50e350c47b9df846d11782f0949cc835a82ca985f0d21bf4fb2"
	Dec 27 20:28:57 embed-certs-820583 kubelet[737]: E1227 20:28:57.179862     737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-lw2jd_kubernetes-dashboard(59d06d56-971b-4ead-ae8b-d6ad7c1db340)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lw2jd" podUID="59d06d56-971b-4ead-ae8b-d6ad7c1db340"
	Dec 27 20:28:58 embed-certs-820583 kubelet[737]: E1227 20:28:58.183309     737 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lw2jd" containerName="dashboard-metrics-scraper"
	Dec 27 20:28:58 embed-certs-820583 kubelet[737]: I1227 20:28:58.183349     737 scope.go:122] "RemoveContainer" containerID="90c1809ba241a50e350c47b9df846d11782f0949cc835a82ca985f0d21bf4fb2"
	Dec 27 20:28:58 embed-certs-820583 kubelet[737]: E1227 20:28:58.183540     737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-lw2jd_kubernetes-dashboard(59d06d56-971b-4ead-ae8b-d6ad7c1db340)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lw2jd" podUID="59d06d56-971b-4ead-ae8b-d6ad7c1db340"
	Dec 27 20:29:07 embed-certs-820583 kubelet[737]: I1227 20:29:07.207543     737 scope.go:122] "RemoveContainer" containerID="c63598d0016972749ece5b7864b1d64b6047dab84a3a4e96f0e215bfa0652ee1"
	Dec 27 20:29:11 embed-certs-820583 kubelet[737]: E1227 20:29:11.231730     737 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-nvnjg" containerName="coredns"
	Dec 27 20:29:24 embed-certs-820583 kubelet[737]: E1227 20:29:24.091255     737 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lw2jd" containerName="dashboard-metrics-scraper"
	Dec 27 20:29:24 embed-certs-820583 kubelet[737]: I1227 20:29:24.091315     737 scope.go:122] "RemoveContainer" containerID="90c1809ba241a50e350c47b9df846d11782f0949cc835a82ca985f0d21bf4fb2"
	Dec 27 20:29:24 embed-certs-820583 kubelet[737]: I1227 20:29:24.254352     737 scope.go:122] "RemoveContainer" containerID="90c1809ba241a50e350c47b9df846d11782f0949cc835a82ca985f0d21bf4fb2"
	Dec 27 20:29:24 embed-certs-820583 kubelet[737]: E1227 20:29:24.254596     737 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lw2jd" containerName="dashboard-metrics-scraper"
	Dec 27 20:29:24 embed-certs-820583 kubelet[737]: I1227 20:29:24.254623     737 scope.go:122] "RemoveContainer" containerID="2d629d4ced2ba4d8f39349dc81f17c47b3b4f325690155f7be3c02e87bdd3cdf"
	Dec 27 20:29:24 embed-certs-820583 kubelet[737]: E1227 20:29:24.254819     737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-lw2jd_kubernetes-dashboard(59d06d56-971b-4ead-ae8b-d6ad7c1db340)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-lw2jd" podUID="59d06d56-971b-4ead-ae8b-d6ad7c1db340"
	Dec 27 20:29:25 embed-certs-820583 kubelet[737]: I1227 20:29:25.186524     737 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 27 20:29:25 embed-certs-820583 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 20:29:25 embed-certs-820583 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 20:29:25 embed-certs-820583 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:29:25 embed-certs-820583 systemd[1]: kubelet.service: Consumed 1.660s CPU time.
	
	
	==> kubernetes-dashboard [9a9fce26c1c18179e0c6750a04cb5c5c3f21bedaad9787d097befb5daf933a74] <==
	2025/12/27 20:28:43 Using namespace: kubernetes-dashboard
	2025/12/27 20:28:43 Using in-cluster config to connect to apiserver
	2025/12/27 20:28:43 Using secret token for csrf signing
	2025/12/27 20:28:43 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 20:28:43 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 20:28:43 Successful initial request to the apiserver, version: v1.35.0
	2025/12/27 20:28:43 Generating JWE encryption key
	2025/12/27 20:28:43 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 20:28:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 20:28:43 Initializing JWE encryption key from synchronized object
	2025/12/27 20:28:43 Creating in-cluster Sidecar client
	2025/12/27 20:28:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 20:28:43 Serving insecurely on HTTP port: 9090
	2025/12/27 20:29:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 20:28:43 Starting overwatch
	
	
	==> storage-provisioner [756c2ccbc582095c3fbe9d8c0bc622ac40937ce71515d86f9c0d512f8b632a77] <==
	I1227 20:29:07.253679       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 20:29:07.263044       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 20:29:07.263090       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 20:29:07.265554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:10.721741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:14.982720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:18.580764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:21.634998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:24.657493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:24.661509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:29:24.661670       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 20:29:24.661821       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-820583_f5665e07-d2c2-4240-ab64-b21ccab48bbe!
	I1227 20:29:24.661887       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6894b207-1c50-480d-809b-b77065e433a4", APIVersion:"v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-820583_f5665e07-d2c2-4240-ab64-b21ccab48bbe became leader
	W1227 20:29:24.664299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:24.667318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:29:24.762094       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-820583_f5665e07-d2c2-4240-ab64-b21ccab48bbe!
	W1227 20:29:26.671115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:26.676154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:28.679473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:28.683902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c63598d0016972749ece5b7864b1d64b6047dab84a3a4e96f0e215bfa0652ee1] <==
	I1227 20:28:36.466004       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 20:29:06.471569       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-820583 -n embed-certs-820583
E1227 20:29:30.653857   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/auto-436655/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-820583 -n embed-certs-820583: exit status 2 (356.159524ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-820583 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (4.84s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-307728 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-307728 --alsologtostderr -v=1: exit status 80 (1.50502549s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-307728 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 20:29:33.309493  353602 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:29:33.309775  353602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:29:33.309786  353602 out.go:374] Setting ErrFile to fd 2...
	I1227 20:29:33.309790  353602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:29:33.310023  353602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:29:33.310308  353602 out.go:368] Setting JSON to false
	I1227 20:29:33.310327  353602 mustload.go:66] Loading cluster: newest-cni-307728
	I1227 20:29:33.310733  353602 config.go:182] Loaded profile config "newest-cni-307728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:29:33.311144  353602 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:33.329334  353602 host.go:66] Checking if "newest-cni-307728" exists ...
	I1227 20:29:33.329591  353602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:29:33.389696  353602 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:66 SystemTime:2025-12-27 20:29:33.380008079 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:29:33.390369  353602 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22332/minikube-v1.37.0-1766811082-22332-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766811082-22332/minikube-v1.37.0-1766811082-22332-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766811082-22332-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:newest-cni-307728 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1227 20:29:33.397353  353602 out.go:179] * Pausing node newest-cni-307728 ... 
	I1227 20:29:33.398822  353602 host.go:66] Checking if "newest-cni-307728" exists ...
	I1227 20:29:33.399149  353602 ssh_runner.go:195] Run: systemctl --version
	I1227 20:29:33.399195  353602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:33.417765  353602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:33.508281  353602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:29:33.520978  353602 pause.go:52] kubelet running: true
	I1227 20:29:33.521030  353602 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:29:33.649716  353602 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:29:33.649791  353602 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:29:33.720069  353602 cri.go:96] found id: "d4304dcfa6b0cb24706eccf29e75d7f268dee486b4abfa73bbb8fcf240c93492"
	I1227 20:29:33.720095  353602 cri.go:96] found id: "2bb35280e51b042e5f2be8f726a1af43521b6309545b35ce7d6a54505ce3a536"
	I1227 20:29:33.720102  353602 cri.go:96] found id: "2c126392630ac0e9cc664d34784d1f3d6649b49a4944fb5ff018e652894a5f6c"
	I1227 20:29:33.720107  353602 cri.go:96] found id: "2468df267da64ce7026d23c356156eccc868b96989c5479a2b5dfe05bcf8f0dc"
	I1227 20:29:33.720112  353602 cri.go:96] found id: "5ae0e51100b8fd95d4ea6bcc9d74ec6251f93fbcceeee0b2dd3cee32dada6c31"
	I1227 20:29:33.720117  353602 cri.go:96] found id: "413cb76a28516298082c7dbccd5e654b39bcb99ebcb7579b812852e885f9f50f"
	I1227 20:29:33.720121  353602 cri.go:96] found id: ""
	I1227 20:29:33.720182  353602 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:29:33.732485  353602 retry.go:84] will retry after 100ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:29:33Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:29:33.864817  353602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:29:33.877103  353602 pause.go:52] kubelet running: false
	I1227 20:29:33.877160  353602 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:29:33.989712  353602 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:29:33.989789  353602 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:29:34.053329  353602 cri.go:96] found id: "d4304dcfa6b0cb24706eccf29e75d7f268dee486b4abfa73bbb8fcf240c93492"
	I1227 20:29:34.053363  353602 cri.go:96] found id: "2bb35280e51b042e5f2be8f726a1af43521b6309545b35ce7d6a54505ce3a536"
	I1227 20:29:34.053371  353602 cri.go:96] found id: "2c126392630ac0e9cc664d34784d1f3d6649b49a4944fb5ff018e652894a5f6c"
	I1227 20:29:34.053378  353602 cri.go:96] found id: "2468df267da64ce7026d23c356156eccc868b96989c5479a2b5dfe05bcf8f0dc"
	I1227 20:29:34.053383  353602 cri.go:96] found id: "5ae0e51100b8fd95d4ea6bcc9d74ec6251f93fbcceeee0b2dd3cee32dada6c31"
	I1227 20:29:34.053390  353602 cri.go:96] found id: "413cb76a28516298082c7dbccd5e654b39bcb99ebcb7579b812852e885f9f50f"
	I1227 20:29:34.053396  353602 cri.go:96] found id: ""
	I1227 20:29:34.053443  353602 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:29:34.530681  353602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:29:34.543489  353602 pause.go:52] kubelet running: false
	I1227 20:29:34.543544  353602 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:29:34.673728  353602 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:29:34.673814  353602 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:29:34.739031  353602 cri.go:96] found id: "d4304dcfa6b0cb24706eccf29e75d7f268dee486b4abfa73bbb8fcf240c93492"
	I1227 20:29:34.739060  353602 cri.go:96] found id: "2bb35280e51b042e5f2be8f726a1af43521b6309545b35ce7d6a54505ce3a536"
	I1227 20:29:34.739065  353602 cri.go:96] found id: "2c126392630ac0e9cc664d34784d1f3d6649b49a4944fb5ff018e652894a5f6c"
	I1227 20:29:34.739071  353602 cri.go:96] found id: "2468df267da64ce7026d23c356156eccc868b96989c5479a2b5dfe05bcf8f0dc"
	I1227 20:29:34.739076  353602 cri.go:96] found id: "5ae0e51100b8fd95d4ea6bcc9d74ec6251f93fbcceeee0b2dd3cee32dada6c31"
	I1227 20:29:34.739081  353602 cri.go:96] found id: "413cb76a28516298082c7dbccd5e654b39bcb99ebcb7579b812852e885f9f50f"
	I1227 20:29:34.739086  353602 cri.go:96] found id: ""
	I1227 20:29:34.739126  353602 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:29:34.752240  353602 out.go:203] 
	W1227 20:29:34.753287  353602 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:29:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:29:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 20:29:34.753301  353602 out.go:285] * 
	* 
	W1227 20:29:34.755269  353602 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 20:29:34.756375  353602 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-307728 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-307728
helpers_test.go:244: (dbg) docker inspect newest-cni-307728:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "64c609a6122eb3106f9ff66cebdbbe628910b7d3fd5cb3cb509ba3d44be3d3c6",
	        "Created": "2025-12-27T20:28:53.126304312Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 349845,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:29:23.001526813Z",
	            "FinishedAt": "2025-12-27T20:29:22.167982725Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/64c609a6122eb3106f9ff66cebdbbe628910b7d3fd5cb3cb509ba3d44be3d3c6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/64c609a6122eb3106f9ff66cebdbbe628910b7d3fd5cb3cb509ba3d44be3d3c6/hostname",
	        "HostsPath": "/var/lib/docker/containers/64c609a6122eb3106f9ff66cebdbbe628910b7d3fd5cb3cb509ba3d44be3d3c6/hosts",
	        "LogPath": "/var/lib/docker/containers/64c609a6122eb3106f9ff66cebdbbe628910b7d3fd5cb3cb509ba3d44be3d3c6/64c609a6122eb3106f9ff66cebdbbe628910b7d3fd5cb3cb509ba3d44be3d3c6-json.log",
	        "Name": "/newest-cni-307728",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-307728:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-307728",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "64c609a6122eb3106f9ff66cebdbbe628910b7d3fd5cb3cb509ba3d44be3d3c6",
	                "LowerDir": "/var/lib/docker/overlay2/f2be7072159fdfaae7f81698823efb92800f860bb8ccada9afcd73cde0a4096e-init/diff:/var/lib/docker/overlay2/37a0694d5e09176fe02add5b16d603e44d65e3a985a3465775c60819ea782502/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2be7072159fdfaae7f81698823efb92800f860bb8ccada9afcd73cde0a4096e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2be7072159fdfaae7f81698823efb92800f860bb8ccada9afcd73cde0a4096e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2be7072159fdfaae7f81698823efb92800f860bb8ccada9afcd73cde0a4096e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-307728",
	                "Source": "/var/lib/docker/volumes/newest-cni-307728/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-307728",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-307728",
	                "name.minikube.sigs.k8s.io": "newest-cni-307728",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "7f4e920b267727107fc7bd54e180c7eb54feb67041423a82d8d889ed57e4d9e6",
	            "SandboxKey": "/var/run/docker/netns/7f4e920b2677",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-307728": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8d45b1129d8ffab2533043d5d1454842b3b9f2cbc16e12ecfd948c089f363538",
	                    "EndpointID": "c625d4a97c4ccd755e34c0ea68af4251c2b30231a0eb0995b385ddea0060cbb8",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "aa:6a:1e:bf:fe:32",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-307728",
	                        "64c609a6122e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-307728 -n newest-cni-307728
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-307728 -n newest-cni-307728: exit status 2 (316.065648ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-307728 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ delete  │ -p old-k8s-version-762177                                                                                                                                                                                                                     │ old-k8s-version-762177            │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-954154 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-954154      │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ start   │ -p default-k8s-diff-port-954154 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-954154      │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ delete  │ -p old-k8s-version-762177                                                                                                                                                                                                                     │ old-k8s-version-762177            │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ start   │ -p newest-cni-307728 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:29 UTC │
	│ image   │ no-preload-014435 image list --format=json                                                                                                                                                                                                    │ no-preload-014435                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ pause   │ -p no-preload-014435 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-014435                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p no-preload-014435                                                                                                                                                                                                                          │ no-preload-014435                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ addons  │ enable metrics-server -p newest-cni-307728 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p no-preload-014435                                                                                                                                                                                                                          │ no-preload-014435                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ start   │ -p test-preload-dl-gcs-588477 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-588477        │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ stop    │ -p newest-cni-307728 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ delete  │ -p test-preload-dl-gcs-588477                                                                                                                                                                                                                 │ test-preload-dl-gcs-588477        │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ start   │ -p test-preload-dl-github-805734 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-805734     │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p test-preload-dl-github-805734                                                                                                                                                                                                              │ test-preload-dl-github-805734     │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-275955 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-275955 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-275955                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-275955 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ addons  │ enable dashboard -p newest-cni-307728 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ start   │ -p newest-cni-307728 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ image   │ embed-certs-820583 image list --format=json                                                                                                                                                                                                   │ embed-certs-820583                │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ pause   │ -p embed-certs-820583 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-820583                │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p embed-certs-820583                                                                                                                                                                                                                         │ embed-certs-820583                │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ image   │ newest-cni-307728 image list --format=json                                                                                                                                                                                                    │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ pause   │ -p newest-cni-307728 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p embed-certs-820583                                                                                                                                                                                                                         │ embed-certs-820583                │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:29:22
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
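The header format above describes the klog prefix used on every line of this trace. As an illustrative aid (not part of the report), a minimal Go sketch that splits one such line into its fields; the sample line is copied from the trace below and the field names are my own:

```go
package main

import (
	"fmt"
	"regexp"
)

// klogHeader follows the format stated above:
// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogHeader = regexp.MustCompile(`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) (\S+):(\d+)\] (.*)$`)

func main() {
	sample := "I1227 20:29:22.784538  349640 out.go:360] Setting OutFile to fd 1 ..."
	m := klogHeader.FindStringSubmatch(sample)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("level=%s date=%s/%s time=%s pid=%s at=%s:%s\n", m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	fmt.Println("msg:", m[8])
}
```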
	I1227 20:29:22.784538  349640 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:29:22.784794  349640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:29:22.784803  349640 out.go:374] Setting ErrFile to fd 2...
	I1227 20:29:22.784808  349640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:29:22.785052  349640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:29:22.785520  349640 out.go:368] Setting JSON to false
	I1227 20:29:22.786562  349640 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4312,"bootTime":1766863051,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 20:29:22.786612  349640 start.go:143] virtualization: kvm guest
	I1227 20:29:22.788250  349640 out.go:179] * [newest-cni-307728] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 20:29:22.789332  349640 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:29:22.789351  349640 notify.go:221] Checking for updates...
	I1227 20:29:22.791442  349640 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:29:22.792602  349640 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:29:22.793592  349640 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube
	I1227 20:29:22.794578  349640 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 20:29:22.795545  349640 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:29:22.796871  349640 config.go:182] Loaded profile config "newest-cni-307728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:29:22.797487  349640 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:29:22.820540  349640 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 20:29:22.820686  349640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:29:22.876976  349640 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-27 20:29:22.867077037 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:29:22.877116  349640 docker.go:319] overlay module found
	I1227 20:29:22.878722  349640 out.go:179] * Using the docker driver based on existing profile
	I1227 20:29:22.879763  349640 start.go:309] selected driver: docker
	I1227 20:29:22.879776  349640 start.go:928] validating driver "docker" against &{Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:29:22.879862  349640 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:29:22.880423  349640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:29:22.933111  349640 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-27 20:29:22.923700326 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:29:22.933397  349640 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 20:29:22.933437  349640 cni.go:84] Creating CNI manager for ""
	I1227 20:29:22.933495  349640 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:29:22.933527  349640 start.go:353] cluster config:
	{Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:29:22.935838  349640 out.go:179] * Starting "newest-cni-307728" primary control-plane node in "newest-cni-307728" cluster
	I1227 20:29:22.936870  349640 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:29:22.938035  349640 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:29:22.939178  349640 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:29:22.939218  349640 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 20:29:22.939230  349640 cache.go:65] Caching tarball of preloaded images
	I1227 20:29:22.939273  349640 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:29:22.939310  349640 preload.go:251] Found /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 20:29:22.939321  349640 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:29:22.939415  349640 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/config.json ...
	I1227 20:29:22.958953  349640 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:29:22.958973  349640 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:29:22.958989  349640 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:29:22.959021  349640 start.go:360] acquireMachinesLock for newest-cni-307728: {Name:mk68119d6288f8bd1ffbe980f508592c691efe9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:29:22.959080  349640 start.go:364] duration metric: took 32.398µs to acquireMachinesLock for "newest-cni-307728"
	I1227 20:29:22.959096  349640 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:29:22.959101  349640 fix.go:54] fixHost starting: 
	I1227 20:29:22.959287  349640 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:22.976170  349640 fix.go:112] recreateIfNeeded on newest-cni-307728: state=Stopped err=<nil>
	W1227 20:29:22.976196  349640 fix.go:138] unexpected machine state, will restart: <nil>
	W1227 20:29:23.141106  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	W1227 20:29:25.640346  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	W1227 20:29:27.641661  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	I1227 20:29:22.977899  349640 out.go:252] * Restarting existing docker container for "newest-cni-307728" ...
	I1227 20:29:22.977965  349640 cli_runner.go:164] Run: docker start newest-cni-307728
	I1227 20:29:23.209602  349640 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:23.228965  349640 kic.go:430] container "newest-cni-307728" state is running.
	I1227 20:29:23.229357  349640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-307728
	I1227 20:29:23.247657  349640 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/config.json ...
	I1227 20:29:23.247952  349640 machine.go:94] provisionDockerMachine start ...
	I1227 20:29:23.248040  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
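The `--format` argument in the line above is a Go text/template that Docker evaluates against the container's inspect data to pull out the published SSH port. A standalone sketch of the same inner template expression, run against hypothetical stand-in data rather than real `docker inspect` output:

```go
package main

import (
	"os"
	"text/template"
)

// Minimal shapes standing in for the parts of `docker inspect` output that
// the format string touches; field names mirror the real JSON keys.
type portBinding struct{ HostPort string }
type networkSettings struct{ Ports map[string][]portBinding }
type containerInfo struct{ NetworkSettings networkSettings }

func main() {
	c := containerInfo{NetworkSettings: networkSettings{Ports: map[string][]portBinding{
		"22/tcp": {{HostPort: "33133"}}, // illustrative value taken from the trace
	}}}
	// The inner template expression from the --format flag above.
	const f = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	tmpl := template.Must(template.New("port").Parse(f))
	if err := tmpl.Execute(os.Stdout, c); err != nil { // prints 33133
		panic(err)
	}
}
```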
	I1227 20:29:23.266559  349640 main.go:144] libmachine: Using SSH client type: native
	I1227 20:29:23.266854  349640 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1227 20:29:23.266871  349640 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:29:23.267586  349640 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40356->127.0.0.1:33133: read: connection reset by peer
	I1227 20:29:26.389693  349640 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-307728
	
	I1227 20:29:26.389724  349640 ubuntu.go:182] provisioning hostname "newest-cni-307728"
	I1227 20:29:26.389772  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:26.407725  349640 main.go:144] libmachine: Using SSH client type: native
	I1227 20:29:26.407964  349640 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1227 20:29:26.407977  349640 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-307728 && echo "newest-cni-307728" | sudo tee /etc/hostname
	I1227 20:29:26.537069  349640 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-307728
	
	I1227 20:29:26.537154  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:26.554605  349640 main.go:144] libmachine: Using SSH client type: native
	I1227 20:29:26.554823  349640 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1227 20:29:26.554839  349640 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-307728' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-307728/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-307728' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:29:26.675284  349640 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:29:26.675315  349640 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-10897/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-10897/.minikube}
	I1227 20:29:26.675364  349640 ubuntu.go:190] setting up certificates
	I1227 20:29:26.675387  349640 provision.go:84] configureAuth start
	I1227 20:29:26.675446  349640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-307728
	I1227 20:29:26.693637  349640 provision.go:143] copyHostCerts
	I1227 20:29:26.693688  349640 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem, removing ...
	I1227 20:29:26.693704  349640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem
	I1227 20:29:26.693768  349640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem (1123 bytes)
	I1227 20:29:26.693867  349640 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem, removing ...
	I1227 20:29:26.693885  349640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem
	I1227 20:29:26.693934  349640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem (1675 bytes)
	I1227 20:29:26.694025  349640 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem, removing ...
	I1227 20:29:26.694034  349640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem
	I1227 20:29:26.694061  349640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem (1078 bytes)
	I1227 20:29:26.694130  349640 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem org=jenkins.newest-cni-307728 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-307728]
	I1227 20:29:26.867266  349640 provision.go:177] copyRemoteCerts
	I1227 20:29:26.867338  349640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:29:26.867388  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:26.885478  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:26.980147  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:29:26.999076  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 20:29:27.017075  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 20:29:27.035083  349640 provision.go:87] duration metric: took 359.672918ms to configureAuth
	I1227 20:29:27.035111  349640 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:29:27.035327  349640 config.go:182] Loaded profile config "newest-cni-307728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:29:27.035447  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:27.052793  349640 main.go:144] libmachine: Using SSH client type: native
	I1227 20:29:27.053075  349640 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1227 20:29:27.053104  349640 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:29:27.343702  349640 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:29:27.343727  349640 machine.go:97] duration metric: took 4.095755604s to provisionDockerMachine
	I1227 20:29:27.343741  349640 start.go:293] postStartSetup for "newest-cni-307728" (driver="docker")
	I1227 20:29:27.343754  349640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:29:27.343815  349640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:29:27.343863  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:27.367256  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:27.461046  349640 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:29:27.464376  349640 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:29:27.464409  349640 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:29:27.464430  349640 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/addons for local assets ...
	I1227 20:29:27.464483  349640 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/files for local assets ...
	I1227 20:29:27.464567  349640 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem -> 144272.pem in /etc/ssl/certs
	I1227 20:29:27.464649  349640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:29:27.471953  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:29:27.488345  349640 start.go:296] duration metric: took 144.591413ms for postStartSetup
	I1227 20:29:27.488403  349640 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:29:27.488434  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:27.506383  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:27.597986  349640 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:29:27.602558  349640 fix.go:56] duration metric: took 4.64345174s for fixHost
	I1227 20:29:27.602585  349640 start.go:83] releasing machines lock for "newest-cni-307728", held for 4.643494258s
	I1227 20:29:27.602644  349640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-307728
	I1227 20:29:27.623164  349640 ssh_runner.go:195] Run: cat /version.json
	I1227 20:29:27.623225  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:27.623311  349640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:29:27.623401  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:27.644318  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:27.644706  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:27.735874  349640 ssh_runner.go:195] Run: systemctl --version
	I1227 20:29:27.796779  349640 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:29:27.836209  349640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:29:27.841396  349640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:29:27.841458  349640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:29:27.849842  349640 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:29:27.849864  349640 start.go:496] detecting cgroup driver to use...
	I1227 20:29:27.849891  349640 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 20:29:27.850059  349640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:29:27.863872  349640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:29:27.876702  349640 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:29:27.876753  349640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:29:27.890649  349640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:29:27.903058  349640 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:29:27.992790  349640 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:29:28.078394  349640 docker.go:234] disabling docker service ...
	I1227 20:29:28.078471  349640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:29:28.093111  349640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:29:28.105866  349640 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:29:28.195542  349640 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:29:28.278015  349640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:29:28.291348  349640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:29:28.305334  349640 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:29:28.305405  349640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.314550  349640 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 20:29:28.314619  349640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.324597  349640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.334691  349640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.346435  349640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:29:28.356445  349640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.366534  349640 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.375089  349640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.384484  349640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:29:28.392136  349640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:29:28.399804  349640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:29:28.488345  349640 ssh_runner.go:195] Run: sudo systemctl restart crio
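The run of sed substitutions above rewrites the CRI-O drop-in (pause image, cgroup manager, conmon cgroup, default sysctls) before crio is restarted. As a rough Go equivalent of the first two edits, operating on a hypothetical fragment of /etc/crio/crio.conf.d/02-crio.conf rather than the file on the machine under test:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical fragment of /etc/crio/crio.conf.d/02-crio.conf before the edits.
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "cgroupfs"
`
	// Same effect as the first two sed substitutions in the trace above.
	pauseLine := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupLine := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pauseLine.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = cgroupLine.ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	fmt.Print(conf)
}
```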
	I1227 20:29:28.627177  349640 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:29:28.627250  349640 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:29:28.631981  349640 start.go:574] Will wait 60s for crictl version
	I1227 20:29:28.632034  349640 ssh_runner.go:195] Run: which crictl
	I1227 20:29:28.635757  349640 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:29:28.661999  349640 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:29:28.662074  349640 ssh_runner.go:195] Run: crio --version
	I1227 20:29:28.692995  349640 ssh_runner.go:195] Run: crio --version
	I1227 20:29:28.727086  349640 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:29:28.728112  349640 cli_runner.go:164] Run: docker network inspect newest-cni-307728 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:29:28.747478  349640 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1227 20:29:28.752558  349640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
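The bash command above refreshes the host.minikube.internal entry by filtering any existing line out of /etc/hosts and appending a new one. A small Go sketch of the same drop-then-append logic on an in-memory hosts file; the file contents here are illustrative, not taken from the node:

```go
package main

import (
	"fmt"
	"strings"
)

// setHostsEntry drops any line ending in "\t<name>" and appends "<ip>\t<name>",
// the same net effect as the grep -v / echo / cp pipeline above.
func setHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n10.0.0.1\thost.minikube.internal\n"
	fmt.Print(setHostsEntry(hosts, "192.168.103.1", "host.minikube.internal"))
}
```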
	I1227 20:29:28.764745  349640 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1227 20:29:28.765905  349640 kubeadm.go:884] updating cluster {Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:29:28.766060  349640 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:29:28.766106  349640 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:29:28.806106  349640 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:29:28.806131  349640 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:29:28.806184  349640 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:29:28.834446  349640 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:29:28.834465  349640 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:29:28.834473  349640 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0 crio true true} ...
	I1227 20:29:28.834603  349640 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-307728 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:29:28.834686  349640 ssh_runner.go:195] Run: crio config
	I1227 20:29:28.888266  349640 cni.go:84] Creating CNI manager for ""
	I1227 20:29:28.888297  349640 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:29:28.888314  349640 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1227 20:29:28.888343  349640 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-307728 NodeName:newest-cni-307728 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:29:28.888514  349640 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-307728"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
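The generated kubeadm config above keeps the pod subnet (10.42.0.0/16, from kubeadm.pod-network-cidr) disjoint from the service subnet (10.96.0.0/12). A quick Go check of that property, using the two values from the config; the overlap test itself is a generic CIDR check, not minikube code:

```go
package main

import (
	"fmt"
	"net"
)

// overlaps reports whether two CIDR blocks share any addresses; for CIDRs this
// reduces to one block containing the other's network address.
func overlaps(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	_, pods, _ := net.ParseCIDR("10.42.0.0/16") // podSubnet / clusterCIDR above
	_, svcs, _ := net.ParseCIDR("10.96.0.0/12") // serviceSubnet above
	fmt.Println("pod/service CIDRs overlap:", overlaps(pods, svcs)) // false
}
```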
	
	I1227 20:29:28.888582  349640 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:29:28.896598  349640 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:29:28.896658  349640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:29:28.904048  349640 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1227 20:29:28.916029  349640 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:29:28.928184  349640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1227 20:29:28.940621  349640 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:29:28.944032  349640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:29:28.953826  349640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:29:29.049430  349640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:29:29.069168  349640 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728 for IP: 192.168.103.2
	I1227 20:29:29.069184  349640 certs.go:195] generating shared ca certs ...
	I1227 20:29:29.069197  349640 certs.go:227] acquiring lock for ca certs: {Name:mkf159d13acb73e6d0ac046e4aaef1dce56a185d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:29:29.069335  349640 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key
	I1227 20:29:29.069415  349640 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key
	I1227 20:29:29.069430  349640 certs.go:257] generating profile certs ...
	I1227 20:29:29.069535  349640 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/client.key
	I1227 20:29:29.069615  349640 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.key.f45295df
	I1227 20:29:29.069674  349640 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.key
	I1227 20:29:29.069814  349640 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem (1338 bytes)
	W1227 20:29:29.069857  349640 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427_empty.pem, impossibly tiny 0 bytes
	I1227 20:29:29.069870  349640 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 20:29:29.069905  349640 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:29:29.069966  349640 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:29:29.070003  349640 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem (1675 bytes)
	I1227 20:29:29.070061  349640 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:29:29.070605  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:29:29.089009  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:29:29.112171  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:29:29.134212  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 20:29:29.158133  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 20:29:29.181988  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:29:29.200678  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:29:29.218007  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:29:29.235685  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem --> /usr/share/ca-certificates/14427.pem (1338 bytes)
	I1227 20:29:29.255652  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /usr/share/ca-certificates/144272.pem (1708 bytes)
	I1227 20:29:29.274505  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:29:29.294020  349640 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:29:29.308273  349640 ssh_runner.go:195] Run: openssl version
	I1227 20:29:29.314351  349640 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14427.pem
	I1227 20:29:29.321706  349640 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14427.pem /etc/ssl/certs/14427.pem
	I1227 20:29:29.329192  349640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14427.pem
	I1227 20:29:29.332801  349640 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 19:58 /usr/share/ca-certificates/14427.pem
	I1227 20:29:29.332846  349640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14427.pem
	I1227 20:29:29.370829  349640 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:29:29.378564  349640 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/144272.pem
	I1227 20:29:29.386204  349640 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/144272.pem /etc/ssl/certs/144272.pem
	I1227 20:29:29.393976  349640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144272.pem
	I1227 20:29:29.397479  349640 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 19:58 /usr/share/ca-certificates/144272.pem
	I1227 20:29:29.397525  349640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144272.pem
	I1227 20:29:29.433631  349640 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:29:29.440987  349640 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:29:29.449024  349640 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:29:29.457943  349640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:29:29.461620  349640 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:29:29.461665  349640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:29:29.499185  349640 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:29:29.506551  349640 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:29:29.510965  349640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:29:29.551280  349640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:29:29.589754  349640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:29:29.641048  349640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:29:29.698248  349640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:29:29.757405  349640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1227 20:29:29.803735  349640 kubeadm.go:401] StartCluster: {Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:29:29.803836  349640 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:29:29.803901  349640 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:29:29.835928  349640 cri.go:96] found id: "2c126392630ac0e9cc664d34784d1f3d6649b49a4944fb5ff018e652894a5f6c"
	I1227 20:29:29.835952  349640 cri.go:96] found id: "2468df267da64ce7026d23c356156eccc868b96989c5479a2b5dfe05bcf8f0dc"
	I1227 20:29:29.835959  349640 cri.go:96] found id: "5ae0e51100b8fd95d4ea6bcc9d74ec6251f93fbcceeee0b2dd3cee32dada6c31"
	I1227 20:29:29.835967  349640 cri.go:96] found id: "413cb76a28516298082c7dbccd5e654b39bcb99ebcb7579b812852e885f9f50f"
	I1227 20:29:29.835971  349640 cri.go:96] found id: ""
	I1227 20:29:29.836012  349640 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 20:29:29.848165  349640 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:29:29Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:29:29.848217  349640 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:29:29.857470  349640 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:29:29.857490  349640 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:29:29.857540  349640 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:29:29.865790  349640 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:29:29.866736  349640 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-307728" does not appear in /home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:29:29.867255  349640 kubeconfig.go:62] /home/jenkins/minikube-integration/22332-10897/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-307728" cluster setting kubeconfig missing "newest-cni-307728" context setting]
	I1227 20:29:29.867965  349640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/kubeconfig: {Name:mk4ccafb996928672a8e78a62ce47d8add645009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:29:29.869656  349640 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:29:29.877605  349640 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1227 20:29:29.877634  349640 kubeadm.go:602] duration metric: took 20.137461ms to restartPrimaryControlPlane
	I1227 20:29:29.877651  349640 kubeadm.go:403] duration metric: took 73.916779ms to StartCluster
	I1227 20:29:29.877669  349640 settings.go:142] acquiring lock: {Name:mkf3077b12a3cef0d75d0d0642c2652aa74718f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:29:29.877726  349640 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:29:29.879534  349640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/kubeconfig: {Name:mk4ccafb996928672a8e78a62ce47d8add645009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:29:29.879773  349640 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:29:29.880023  349640 config.go:182] Loaded profile config "newest-cni-307728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:29:29.880084  349640 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:29:29.880164  349640 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-307728"
	I1227 20:29:29.880179  349640 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-307728"
	W1227 20:29:29.880192  349640 addons.go:248] addon storage-provisioner should already be in state true
	I1227 20:29:29.880216  349640 host.go:66] Checking if "newest-cni-307728" exists ...
	I1227 20:29:29.880295  349640 addons.go:70] Setting dashboard=true in profile "newest-cni-307728"
	I1227 20:29:29.880319  349640 addons.go:70] Setting default-storageclass=true in profile "newest-cni-307728"
	I1227 20:29:29.880353  349640 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-307728"
	I1227 20:29:29.880324  349640 addons.go:239] Setting addon dashboard=true in "newest-cni-307728"
	W1227 20:29:29.880433  349640 addons.go:248] addon dashboard should already be in state true
	I1227 20:29:29.880462  349640 host.go:66] Checking if "newest-cni-307728" exists ...
	I1227 20:29:29.880671  349640 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:29.880672  349640 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:29.880907  349640 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:29.885068  349640 out.go:179] * Verifying Kubernetes components...
	I1227 20:29:29.888082  349640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:29:29.906427  349640 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:29:29.906423  349640 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 20:29:29.906727  349640 addons.go:239] Setting addon default-storageclass=true in "newest-cni-307728"
	W1227 20:29:29.906749  349640 addons.go:248] addon default-storageclass should already be in state true
	I1227 20:29:29.906798  349640 host.go:66] Checking if "newest-cni-307728" exists ...
	I1227 20:29:29.907308  349640 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:29.908502  349640 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:29:29.908563  349640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:29:29.908620  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:29.909594  349640 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 20:29:29.910726  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 20:29:29.910750  349640 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 20:29:29.910812  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:29.939150  349640 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:29:29.939175  349640 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:29:29.939233  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:29.940432  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:29.944922  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:29.977263  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:30.045064  349640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:29:30.058167  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 20:29:30.058191  349640 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 20:29:30.061437  349640 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:29:30.061487  349640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:29:30.073175  349640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:29:30.076392  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 20:29:30.076416  349640 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 20:29:30.079332  349640 api_server.go:72] duration metric: took 199.523544ms to wait for apiserver process to appear ...
	I1227 20:29:30.079356  349640 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:29:30.079373  349640 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1227 20:29:30.090441  349640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:29:30.094269  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 20:29:30.094291  349640 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 20:29:30.114023  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 20:29:30.114046  349640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 20:29:30.131494  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 20:29:30.131515  349640 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 20:29:30.149541  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 20:29:30.149615  349640 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 20:29:30.167283  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 20:29:30.167310  349640 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 20:29:30.184004  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 20:29:30.184024  349640 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 20:29:30.201013  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:29:30.201038  349640 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 20:29:30.217298  349640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:29:30.994230  349640 api_server.go:325] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1227 20:29:30.994265  349640 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1227 20:29:30.994280  349640 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1227 20:29:31.078728  349640 api_server.go:325] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1227 20:29:31.078755  349640 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1227 20:29:31.079882  349640 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1227 20:29:31.090296  349640 api_server.go:325] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1227 20:29:31.090325  349640 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1227 20:29:31.580397  349640 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1227 20:29:31.585748  349640 api_server.go:325] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:29:31.585801  349640 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:29:31.606183  349640 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.532971297s)
	I1227 20:29:31.606241  349640 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.515771231s)
	I1227 20:29:31.606358  349640 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.389020153s)
	I1227 20:29:31.607861  349640 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-307728 addons enable metrics-server
	
	I1227 20:29:31.616813  349640 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 20:29:31.617978  349640 addons.go:530] duration metric: took 1.737896941s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1227 20:29:32.080229  349640 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1227 20:29:32.084464  349640 api_server.go:325] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:29:32.084506  349640 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:29:32.580069  349640 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1227 20:29:32.584664  349640 api_server.go:325] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1227 20:29:32.585710  349640 api_server.go:141] control plane version: v1.35.0
	I1227 20:29:32.585733  349640 api_server.go:131] duration metric: took 2.506370541s to wait for apiserver health ...
	I1227 20:29:32.585741  349640 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:29:32.588682  349640 system_pods.go:59] 8 kube-system pods found
	I1227 20:29:32.588707  349640 system_pods.go:61] "coredns-7d764666f9-v4xtw" [54b9ffbd-579b-483a-aa05-a65988e43aae] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 20:29:32.588718  349640 system_pods.go:61] "etcd-newest-cni-307728" [47c59b02-ea05-4deb-a2d5-f33fe18e738b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:29:32.588742  349640 system_pods.go:61] "kindnet-6z4tn" [93ba591e-f91b-4d17-bc19-0df196548fdd] Running
	I1227 20:29:32.588751  349640 system_pods.go:61] "kube-apiserver-newest-cni-307728" [ff05d4da-e496-4611-90a2-32a9e49a76a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:29:32.588759  349640 system_pods.go:61] "kube-controller-manager-newest-cni-307728" [98a6898f-bd6c-4bb5-97eb-767920c25375] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:29:32.588772  349640 system_pods.go:61] "kube-proxy-9qccb" [7af7999b-ede9-4da5-8e6f-df77472e1cdd] Running
	I1227 20:29:32.588778  349640 system_pods.go:61] "kube-scheduler-newest-cni-307728" [cac454d9-fa90-45da-b22c-5d0e23dc78a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:29:32.588785  349640 system_pods.go:61] "storage-provisioner" [b4c1fa65-07d5-4f68-a68b-43acd8569dd1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 20:29:32.588790  349640 system_pods.go:74] duration metric: took 3.044295ms to wait for pod list to return data ...
	I1227 20:29:32.588801  349640 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:29:32.591068  349640 default_sa.go:45] found service account: "default"
	I1227 20:29:32.591087  349640 default_sa.go:55] duration metric: took 2.281836ms for default service account to be created ...
	I1227 20:29:32.591100  349640 kubeadm.go:587] duration metric: took 2.711295065s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 20:29:32.591132  349640 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:29:32.592996  349640 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 20:29:32.593016  349640 node_conditions.go:123] node cpu capacity is 8
	I1227 20:29:32.593030  349640 node_conditions.go:105] duration metric: took 1.888982ms to run NodePressure ...
	I1227 20:29:32.593046  349640 start.go:242] waiting for startup goroutines ...
	I1227 20:29:32.593062  349640 start.go:247] waiting for cluster config update ...
	I1227 20:29:32.593076  349640 start.go:256] writing updated cluster config ...
	I1227 20:29:32.593351  349640 ssh_runner.go:195] Run: rm -f paused
	I1227 20:29:32.641222  349640 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	W1227 20:29:30.142107  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	W1227 20:29:32.640365  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	I1227 20:29:32.643768  349640 out.go:179] * Done! kubectl is now configured to use "newest-cni-307728" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.462410651Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.466124153Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=bb488b54-d163-459b-b048-187392d9e293 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.466579302Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=1b125151-53d7-40f4-83d7-405a24653846 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.468372335Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.469072689Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.469289378Z" level=info msg="Ran pod sandbox 7a115c67fcc26f8b9b7c0d3c4ff39ecd2d77c963dba308c2607612953d781d75 with infra container: kube-system/kindnet-6z4tn/POD" id=1b125151-53d7-40f4-83d7-405a24653846 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.470058751Z" level=info msg="Ran pod sandbox 2518a11b6026399f4e6fce6259a9973c4e8308aa3a28a3a40e5240ca503455c9 with infra container: kube-system/kube-proxy-9qccb/POD" id=bb488b54-d163-459b-b048-187392d9e293 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.470527031Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=582a6dcc-d254-4402-9906-434870531e9a name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.471063805Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=a1ced8a8-0b29-46fb-b8b2-6b40151a8fc7 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.471417615Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=a6c0130c-e72a-4286-85b7-f1e5f6c2fb35 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.47194Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=567961d1-164c-46f0-a12d-1437577d461d name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.472476451Z" level=info msg="Creating container: kube-system/kindnet-6z4tn/kindnet-cni" id=26427658-74c2-44ef-a7c6-1ed6eaf3c482 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.472553273Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.472809766Z" level=info msg="Creating container: kube-system/kube-proxy-9qccb/kube-proxy" id=e0addffd-f391-46b7-9376-e9a42c5d3bd4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.472905063Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.476882425Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.477450915Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.481998454Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.482457619Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.504050766Z" level=info msg="Created container 2bb35280e51b042e5f2be8f726a1af43521b6309545b35ce7d6a54505ce3a536: kube-system/kindnet-6z4tn/kindnet-cni" id=26427658-74c2-44ef-a7c6-1ed6eaf3c482 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.505017694Z" level=info msg="Starting container: 2bb35280e51b042e5f2be8f726a1af43521b6309545b35ce7d6a54505ce3a536" id=4dbdf15f-772c-4bbb-bc71-8f30ed2b6d6a name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.507692086Z" level=info msg="Started container" PID=1059 containerID=2bb35280e51b042e5f2be8f726a1af43521b6309545b35ce7d6a54505ce3a536 description=kube-system/kindnet-6z4tn/kindnet-cni id=4dbdf15f-772c-4bbb-bc71-8f30ed2b6d6a name=/runtime.v1.RuntimeService/StartContainer sandboxID=7a115c67fcc26f8b9b7c0d3c4ff39ecd2d77c963dba308c2607612953d781d75
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.511886997Z" level=info msg="Created container d4304dcfa6b0cb24706eccf29e75d7f268dee486b4abfa73bbb8fcf240c93492: kube-system/kube-proxy-9qccb/kube-proxy" id=e0addffd-f391-46b7-9376-e9a42c5d3bd4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.512532592Z" level=info msg="Starting container: d4304dcfa6b0cb24706eccf29e75d7f268dee486b4abfa73bbb8fcf240c93492" id=5f0f5ae7-423b-474b-8338-3ded8d138bca name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.51535273Z" level=info msg="Started container" PID=1060 containerID=d4304dcfa6b0cb24706eccf29e75d7f268dee486b4abfa73bbb8fcf240c93492 description=kube-system/kube-proxy-9qccb/kube-proxy id=5f0f5ae7-423b-474b-8338-3ded8d138bca name=/runtime.v1.RuntimeService/StartContainer sandboxID=2518a11b6026399f4e6fce6259a9973c4e8308aa3a28a3a40e5240ca503455c9
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d4304dcfa6b0c       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8   4 seconds ago       Running             kube-proxy                1                   2518a11b60263       kube-proxy-9qccb                            kube-system
	2bb35280e51b0       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251   4 seconds ago       Running             kindnet-cni               1                   7a115c67fcc26       kindnet-6z4tn                               kube-system
	2c126392630ac       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc   6 seconds ago       Running             kube-scheduler            1                   dbe168cb7660b       kube-scheduler-newest-cni-307728            kube-system
	2468df267da64       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508   6 seconds ago       Running             kube-controller-manager   1                   36cedc9f244ae       kube-controller-manager-newest-cni-307728   kube-system
	5ae0e51100b8f       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499   6 seconds ago       Running             kube-apiserver            1                   69455ecafaf9d       kube-apiserver-newest-cni-307728            kube-system
	413cb76a28516       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   6 seconds ago       Running             etcd                      1                   44109c166e1f5       etcd-newest-cni-307728                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-307728
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-307728
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=newest-cni-307728
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_29_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:29:01 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-307728
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:29:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:29:31 +0000   Sat, 27 Dec 2025 20:29:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:29:31 +0000   Sat, 27 Dec 2025 20:29:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:29:31 +0000   Sat, 27 Dec 2025 20:29:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 27 Dec 2025 20:29:31 +0000   Sat, 27 Dec 2025 20:29:00 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-307728
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                b8493783-f7be-4c30-8a0f-ec2eeceb6491
	  Boot ID:                    e862e3ac-f8e7-431b-a536-9252fc29cb3f
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-307728                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-6z4tn                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-newest-cni-307728             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-newest-cni-307728    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-9qccb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-newest-cni-307728             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  27s   node-controller  Node newest-cni-307728 event: Registered Node newest-cni-307728 in Controller
	  Normal  RegisteredNode  1s    node-controller  Node newest-cni-307728 event: Registered Node newest-cni-307728 in Controller
	
	
	==> dmesg <==
	[  +0.091803] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026212] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.286875] kauditd_printk_skb: 47 callbacks suppressed
	[Dec27 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e cb b0 88 25 d7 08 06
	[ +12.641300] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 46 2d 1c 8d 7d 08 06
	[  +0.000396] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e cb b0 88 25 d7 08 06
	[Dec27 20:26] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 11 8b 52 63 6c 08 06
	[ +22.750477] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 b6 be 1c 32 17 08 06
	[  +0.000388] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 11 8b 52 63 6c 08 06
	[ +11.650371] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae cd 49 2a 1d c0 08 06
	[Dec27 20:27] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 53 ca 84 73 c0 08 06
	[  +0.000337] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 74 f5 52 32 9a 08 06
	[ +17.275927] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 80 6a 51 25 19 08 06
	[  +0.000341] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ae cd 49 2a 1d c0 08 06
	
	
	==> etcd [413cb76a28516298082c7dbccd5e654b39bcb99ebcb7579b812852e885f9f50f] <==
	{"level":"info","ts":"2025-12-27T20:29:29.759186Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T20:29:29.759267Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T20:29:29.759334Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-27T20:29:29.759702Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T20:29:29.759737Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T20:29:29.759693Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-27T20:29:29.759813Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-27T20:29:29.950139Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T20:29:29.950196Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T20:29:29.950274Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-27T20:29:29.950288Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:29:29.950307Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T20:29:29.952084Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-27T20:29:29.952719Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:29:29.953626Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T20:29:29.953677Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-27T20:29:29.959426Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:newest-cni-307728 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:29:29.959524Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:29:29.959549Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:29:29.960289Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:29:29.961996Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T20:29:29.961507Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:29:29.964905Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:29:29.965967Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:29:29.968121Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 20:29:35 up  1:12,  0 user,  load average: 2.80, 3.08, 2.26
	Linux newest-cni-307728 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2bb35280e51b042e5f2be8f726a1af43521b6309545b35ce7d6a54505ce3a536] <==
	I1227 20:29:31.679982       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:29:31.680265       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1227 20:29:31.680410       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:29:31.680436       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:29:31.680467       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:29:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:29:31.885010       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:29:31.885040       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:29:31.885051       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:29:31.885200       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [5ae0e51100b8fd95d4ea6bcc9d74ec6251f93fbcceeee0b2dd3cee32dada6c31] <==
	I1227 20:29:31.122479       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:31.122507       1 policy_source.go:248] refreshing policies
	I1227 20:29:31.169477       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:29:31.185688       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 20:29:31.185943       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 20:29:31.185955       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 20:29:31.185960       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:31.185960       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 20:29:31.186448       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 20:29:31.186653       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 20:29:31.186744       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 20:29:31.190907       1 cache.go:39] Caches are synced for autoregister controller
	I1227 20:29:31.194508       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 20:29:31.196034       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:29:31.220980       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:29:31.405100       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 20:29:31.437827       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 20:29:31.455506       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:29:31.466136       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:29:31.507132       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.172.141"}
	I1227 20:29:31.517843       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.137.216"}
	I1227 20:29:31.989196       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:29:34.611140       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 20:29:34.660643       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:29:34.710438       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2468df267da64ce7026d23c356156eccc868b96989c5479a2b5dfe05bcf8f0dc] <==
	I1227 20:29:34.218649       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:29:34.218654       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.218659       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.219190       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:29:34.219356       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.220503       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.220578       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.220588       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.220826       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.220874       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.221704       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.221754       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.222483       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.221739       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.222641       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.222650       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.222683       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.222717       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.222868       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.227036       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.229284       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.319595       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.321754       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.321770       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:29:34.321776       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [d4304dcfa6b0cb24706eccf29e75d7f268dee486b4abfa73bbb8fcf240c93492] <==
	I1227 20:29:31.551895       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:29:31.622582       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:29:31.723521       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:31.723562       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1227 20:29:31.723646       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:29:31.741003       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:29:31.741068       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:29:31.746096       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:29:31.746506       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:29:31.746538       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:29:31.747728       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:29:31.747748       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:29:31.747783       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:29:31.747786       1 config.go:309] "Starting node config controller"
	I1227 20:29:31.747797       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:29:31.747809       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:29:31.747782       1 config.go:200] "Starting service config controller"
	I1227 20:29:31.747817       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:29:31.747789       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:29:31.848663       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:29:31.848679       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 20:29:31.848709       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2c126392630ac0e9cc664d34784d1f3d6649b49a4944fb5ff018e652894a5f6c] <==
	I1227 20:29:31.031978       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:29:31.034909       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 20:29:31.035069       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 20:29:31.035084       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:29:31.035106       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1227 20:29:31.085587       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1227 20:29:31.086824       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 20:29:31.086990       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 20:29:31.087111       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 20:29:31.087221       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 20:29:31.087301       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 20:29:31.087365       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 20:29:31.087471       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 20:29:31.087534       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 20:29:31.087557       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 20:29:31.087609       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 20:29:31.087629       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 20:29:31.087712       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 20:29:31.087792       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 20:29:31.087806       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 20:29:31.091504       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 20:29:31.091885       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 20:29:31.092048       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 20:29:31.092212       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	I1227 20:29:32.536099       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: I1227 20:29:31.211406     676 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-307728"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: I1227 20:29:31.211568     676 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-307728"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: I1227 20:29:31.211608     676 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: E1227 20:29:31.211671     676 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-307728\" already exists" pod="kube-system/kube-scheduler-newest-cni-307728"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: E1227 20:29:31.211743     676 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-307728\" already exists" pod="kube-system/etcd-newest-cni-307728"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: E1227 20:29:31.211758     676 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-307728" containerName="kube-scheduler"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: E1227 20:29:31.211810     676 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-307728" containerName="etcd"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: I1227 20:29:31.212494     676 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: I1227 20:29:31.213131     676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/93ba591e-f91b-4d17-bc19-0df196548fdd-cni-cfg\") pod \"kindnet-6z4tn\" (UID: \"93ba591e-f91b-4d17-bc19-0df196548fdd\") " pod="kube-system/kindnet-6z4tn"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: I1227 20:29:31.213172     676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93ba591e-f91b-4d17-bc19-0df196548fdd-xtables-lock\") pod \"kindnet-6z4tn\" (UID: \"93ba591e-f91b-4d17-bc19-0df196548fdd\") " pod="kube-system/kindnet-6z4tn"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: I1227 20:29:31.213214     676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7af7999b-ede9-4da5-8e6f-df77472e1cdd-xtables-lock\") pod \"kube-proxy-9qccb\" (UID: \"7af7999b-ede9-4da5-8e6f-df77472e1cdd\") " pod="kube-system/kube-proxy-9qccb"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: I1227 20:29:31.213238     676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7af7999b-ede9-4da5-8e6f-df77472e1cdd-lib-modules\") pod \"kube-proxy-9qccb\" (UID: \"7af7999b-ede9-4da5-8e6f-df77472e1cdd\") " pod="kube-system/kube-proxy-9qccb"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: I1227 20:29:31.213267     676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93ba591e-f91b-4d17-bc19-0df196548fdd-lib-modules\") pod \"kindnet-6z4tn\" (UID: \"93ba591e-f91b-4d17-bc19-0df196548fdd\") " pod="kube-system/kindnet-6z4tn"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: E1227 20:29:31.220006     676 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-307728\" already exists" pod="kube-system/kube-apiserver-newest-cni-307728"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: E1227 20:29:31.220123     676 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-307728" containerName="kube-apiserver"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: E1227 20:29:31.220566     676 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-307728\" already exists" pod="kube-system/kube-controller-manager-newest-cni-307728"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: I1227 20:29:31.220614     676 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-307728"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: E1227 20:29:31.229846     676 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-307728\" already exists" pod="kube-system/kube-scheduler-newest-cni-307728"
	Dec 27 20:29:32 newest-cni-307728 kubelet[676]: E1227 20:29:32.206479     676 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-307728" containerName="kube-apiserver"
	Dec 27 20:29:32 newest-cni-307728 kubelet[676]: E1227 20:29:32.206531     676 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-307728" containerName="kube-scheduler"
	Dec 27 20:29:32 newest-cni-307728 kubelet[676]: E1227 20:29:32.206759     676 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-307728" containerName="kube-controller-manager"
	Dec 27 20:29:32 newest-cni-307728 kubelet[676]: E1227 20:29:32.207022     676 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-307728" containerName="etcd"
	Dec 27 20:29:33 newest-cni-307728 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 20:29:33 newest-cni-307728 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 20:29:33 newest-cni-307728 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-307728 -n newest-cni-307728
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-307728 -n newest-cni-307728: exit status 2 (318.144591ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-307728 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-v4xtw storage-provisioner dashboard-metrics-scraper-867fb5f87b-ttpzp kubernetes-dashboard-b84665fb8-29j2c
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-307728 describe pod coredns-7d764666f9-v4xtw storage-provisioner dashboard-metrics-scraper-867fb5f87b-ttpzp kubernetes-dashboard-b84665fb8-29j2c
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-307728 describe pod coredns-7d764666f9-v4xtw storage-provisioner dashboard-metrics-scraper-867fb5f87b-ttpzp kubernetes-dashboard-b84665fb8-29j2c: exit status 1 (58.880057ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-v4xtw" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-ttpzp" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-29j2c" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-307728 describe pod coredns-7d764666f9-v4xtw storage-provisioner dashboard-metrics-scraper-867fb5f87b-ttpzp kubernetes-dashboard-b84665fb8-29j2c: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-307728
helpers_test.go:244: (dbg) docker inspect newest-cni-307728:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "64c609a6122eb3106f9ff66cebdbbe628910b7d3fd5cb3cb509ba3d44be3d3c6",
	        "Created": "2025-12-27T20:28:53.126304312Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 349845,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:29:23.001526813Z",
	            "FinishedAt": "2025-12-27T20:29:22.167982725Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/64c609a6122eb3106f9ff66cebdbbe628910b7d3fd5cb3cb509ba3d44be3d3c6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/64c609a6122eb3106f9ff66cebdbbe628910b7d3fd5cb3cb509ba3d44be3d3c6/hostname",
	        "HostsPath": "/var/lib/docker/containers/64c609a6122eb3106f9ff66cebdbbe628910b7d3fd5cb3cb509ba3d44be3d3c6/hosts",
	        "LogPath": "/var/lib/docker/containers/64c609a6122eb3106f9ff66cebdbbe628910b7d3fd5cb3cb509ba3d44be3d3c6/64c609a6122eb3106f9ff66cebdbbe628910b7d3fd5cb3cb509ba3d44be3d3c6-json.log",
	        "Name": "/newest-cni-307728",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-307728:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-307728",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "64c609a6122eb3106f9ff66cebdbbe628910b7d3fd5cb3cb509ba3d44be3d3c6",
	                "LowerDir": "/var/lib/docker/overlay2/f2be7072159fdfaae7f81698823efb92800f860bb8ccada9afcd73cde0a4096e-init/diff:/var/lib/docker/overlay2/37a0694d5e09176fe02add5b16d603e44d65e3a985a3465775c60819ea782502/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2be7072159fdfaae7f81698823efb92800f860bb8ccada9afcd73cde0a4096e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2be7072159fdfaae7f81698823efb92800f860bb8ccada9afcd73cde0a4096e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2be7072159fdfaae7f81698823efb92800f860bb8ccada9afcd73cde0a4096e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-307728",
	                "Source": "/var/lib/docker/volumes/newest-cni-307728/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-307728",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-307728",
	                "name.minikube.sigs.k8s.io": "newest-cni-307728",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "7f4e920b267727107fc7bd54e180c7eb54feb67041423a82d8d889ed57e4d9e6",
	            "SandboxKey": "/var/run/docker/netns/7f4e920b2677",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-307728": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8d45b1129d8ffab2533043d5d1454842b3b9f2cbc16e12ecfd948c089f363538",
	                    "EndpointID": "c625d4a97c4ccd755e34c0ea68af4251c2b30231a0eb0995b385ddea0060cbb8",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "aa:6a:1e:bf:fe:32",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-307728",
	                        "64c609a6122e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-307728 -n newest-cni-307728
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-307728 -n newest-cni-307728: exit status 2 (315.911895ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-307728 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ delete  │ -p old-k8s-version-762177                                                                                                                                                                                                                     │ old-k8s-version-762177            │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-954154 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-954154      │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ start   │ -p default-k8s-diff-port-954154 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-954154      │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ delete  │ -p old-k8s-version-762177                                                                                                                                                                                                                     │ old-k8s-version-762177            │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:28 UTC │
	│ start   │ -p newest-cni-307728 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:29 UTC │
	│ image   │ no-preload-014435 image list --format=json                                                                                                                                                                                                    │ no-preload-014435                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ pause   │ -p no-preload-014435 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-014435                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p no-preload-014435                                                                                                                                                                                                                          │ no-preload-014435                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ addons  │ enable metrics-server -p newest-cni-307728 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p no-preload-014435                                                                                                                                                                                                                          │ no-preload-014435                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ start   │ -p test-preload-dl-gcs-588477 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-588477        │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ stop    │ -p newest-cni-307728 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ delete  │ -p test-preload-dl-gcs-588477                                                                                                                                                                                                                 │ test-preload-dl-gcs-588477        │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ start   │ -p test-preload-dl-github-805734 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-805734     │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p test-preload-dl-github-805734                                                                                                                                                                                                              │ test-preload-dl-github-805734     │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-275955 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-275955 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-275955                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-275955 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ addons  │ enable dashboard -p newest-cni-307728 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ start   │ -p newest-cni-307728 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ image   │ embed-certs-820583 image list --format=json                                                                                                                                                                                                   │ embed-certs-820583                │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ pause   │ -p embed-certs-820583 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-820583                │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p embed-certs-820583                                                                                                                                                                                                                         │ embed-certs-820583                │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ image   │ newest-cni-307728 image list --format=json                                                                                                                                                                                                    │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ pause   │ -p newest-cni-307728 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p embed-certs-820583                                                                                                                                                                                                                         │ embed-certs-820583                │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:29:22
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:29:22.784538  349640 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:29:22.784794  349640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:29:22.784803  349640 out.go:374] Setting ErrFile to fd 2...
	I1227 20:29:22.784808  349640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:29:22.785052  349640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:29:22.785520  349640 out.go:368] Setting JSON to false
	I1227 20:29:22.786562  349640 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4312,"bootTime":1766863051,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 20:29:22.786612  349640 start.go:143] virtualization: kvm guest
	I1227 20:29:22.788250  349640 out.go:179] * [newest-cni-307728] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 20:29:22.789332  349640 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:29:22.789351  349640 notify.go:221] Checking for updates...
	I1227 20:29:22.791442  349640 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:29:22.792602  349640 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:29:22.793592  349640 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube
	I1227 20:29:22.794578  349640 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 20:29:22.795545  349640 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:29:22.796871  349640 config.go:182] Loaded profile config "newest-cni-307728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:29:22.797487  349640 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:29:22.820540  349640 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 20:29:22.820686  349640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:29:22.876976  349640 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-27 20:29:22.867077037 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:29:22.877116  349640 docker.go:319] overlay module found
	I1227 20:29:22.878722  349640 out.go:179] * Using the docker driver based on existing profile
	I1227 20:29:22.879763  349640 start.go:309] selected driver: docker
	I1227 20:29:22.879776  349640 start.go:928] validating driver "docker" against &{Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:29:22.879862  349640 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:29:22.880423  349640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:29:22.933111  349640 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-27 20:29:22.923700326 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:29:22.933397  349640 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 20:29:22.933437  349640 cni.go:84] Creating CNI manager for ""
	I1227 20:29:22.933495  349640 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:29:22.933527  349640 start.go:353] cluster config:
	{Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:29:22.935838  349640 out.go:179] * Starting "newest-cni-307728" primary control-plane node in "newest-cni-307728" cluster
	I1227 20:29:22.936870  349640 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:29:22.938035  349640 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:29:22.939178  349640 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:29:22.939218  349640 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 20:29:22.939230  349640 cache.go:65] Caching tarball of preloaded images
	I1227 20:29:22.939273  349640 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:29:22.939310  349640 preload.go:251] Found /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 20:29:22.939321  349640 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:29:22.939415  349640 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/config.json ...
	I1227 20:29:22.958953  349640 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:29:22.958973  349640 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:29:22.958989  349640 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:29:22.959021  349640 start.go:360] acquireMachinesLock for newest-cni-307728: {Name:mk68119d6288f8bd1ffbe980f508592c691efe9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:29:22.959080  349640 start.go:364] duration metric: took 32.398µs to acquireMachinesLock for "newest-cni-307728"
	I1227 20:29:22.959096  349640 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:29:22.959101  349640 fix.go:54] fixHost starting: 
	I1227 20:29:22.959287  349640 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:22.976170  349640 fix.go:112] recreateIfNeeded on newest-cni-307728: state=Stopped err=<nil>
	W1227 20:29:22.976196  349640 fix.go:138] unexpected machine state, will restart: <nil>
	W1227 20:29:23.141106  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	W1227 20:29:25.640346  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	W1227 20:29:27.641661  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	I1227 20:29:22.977899  349640 out.go:252] * Restarting existing docker container for "newest-cni-307728" ...
	I1227 20:29:22.977965  349640 cli_runner.go:164] Run: docker start newest-cni-307728
	I1227 20:29:23.209602  349640 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:23.228965  349640 kic.go:430] container "newest-cni-307728" state is running.
	I1227 20:29:23.229357  349640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-307728
	I1227 20:29:23.247657  349640 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/config.json ...
	I1227 20:29:23.247952  349640 machine.go:94] provisionDockerMachine start ...
	I1227 20:29:23.248040  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:23.266559  349640 main.go:144] libmachine: Using SSH client type: native
	I1227 20:29:23.266854  349640 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1227 20:29:23.266871  349640 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:29:23.267586  349640 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40356->127.0.0.1:33133: read: connection reset by peer
	I1227 20:29:26.389693  349640 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-307728
	
	I1227 20:29:26.389724  349640 ubuntu.go:182] provisioning hostname "newest-cni-307728"
	I1227 20:29:26.389772  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:26.407725  349640 main.go:144] libmachine: Using SSH client type: native
	I1227 20:29:26.407964  349640 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1227 20:29:26.407977  349640 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-307728 && echo "newest-cni-307728" | sudo tee /etc/hostname
	I1227 20:29:26.537069  349640 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-307728
	
	I1227 20:29:26.537154  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:26.554605  349640 main.go:144] libmachine: Using SSH client type: native
	I1227 20:29:26.554823  349640 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1227 20:29:26.554839  349640 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-307728' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-307728/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-307728' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:29:26.675284  349640 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:29:26.675315  349640 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-10897/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-10897/.minikube}
	I1227 20:29:26.675364  349640 ubuntu.go:190] setting up certificates
	I1227 20:29:26.675387  349640 provision.go:84] configureAuth start
	I1227 20:29:26.675446  349640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-307728
	I1227 20:29:26.693637  349640 provision.go:143] copyHostCerts
	I1227 20:29:26.693688  349640 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem, removing ...
	I1227 20:29:26.693704  349640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem
	I1227 20:29:26.693768  349640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem (1123 bytes)
	I1227 20:29:26.693867  349640 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem, removing ...
	I1227 20:29:26.693885  349640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem
	I1227 20:29:26.693934  349640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem (1675 bytes)
	I1227 20:29:26.694025  349640 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem, removing ...
	I1227 20:29:26.694034  349640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem
	I1227 20:29:26.694061  349640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem (1078 bytes)
	I1227 20:29:26.694130  349640 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem org=jenkins.newest-cni-307728 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-307728]
	I1227 20:29:26.867266  349640 provision.go:177] copyRemoteCerts
	I1227 20:29:26.867338  349640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:29:26.867388  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:26.885478  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:26.980147  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:29:26.999076  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 20:29:27.017075  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 20:29:27.035083  349640 provision.go:87] duration metric: took 359.672918ms to configureAuth
	I1227 20:29:27.035111  349640 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:29:27.035327  349640 config.go:182] Loaded profile config "newest-cni-307728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:29:27.035447  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:27.052793  349640 main.go:144] libmachine: Using SSH client type: native
	I1227 20:29:27.053075  349640 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1227 20:29:27.053104  349640 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:29:27.343702  349640 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:29:27.343727  349640 machine.go:97] duration metric: took 4.095755604s to provisionDockerMachine
	I1227 20:29:27.343741  349640 start.go:293] postStartSetup for "newest-cni-307728" (driver="docker")
	I1227 20:29:27.343754  349640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:29:27.343815  349640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:29:27.343863  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:27.367256  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:27.461046  349640 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:29:27.464376  349640 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:29:27.464409  349640 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:29:27.464430  349640 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/addons for local assets ...
	I1227 20:29:27.464483  349640 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/files for local assets ...
	I1227 20:29:27.464567  349640 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem -> 144272.pem in /etc/ssl/certs
	I1227 20:29:27.464649  349640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:29:27.471953  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:29:27.488345  349640 start.go:296] duration metric: took 144.591413ms for postStartSetup
	I1227 20:29:27.488403  349640 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:29:27.488434  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:27.506383  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:27.597986  349640 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:29:27.602558  349640 fix.go:56] duration metric: took 4.64345174s for fixHost
	I1227 20:29:27.602585  349640 start.go:83] releasing machines lock for "newest-cni-307728", held for 4.643494258s
	I1227 20:29:27.602644  349640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-307728
	I1227 20:29:27.623164  349640 ssh_runner.go:195] Run: cat /version.json
	I1227 20:29:27.623225  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:27.623311  349640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:29:27.623401  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:27.644318  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:27.644706  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:27.735874  349640 ssh_runner.go:195] Run: systemctl --version
	I1227 20:29:27.796779  349640 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:29:27.836209  349640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:29:27.841396  349640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:29:27.841458  349640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:29:27.849842  349640 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:29:27.849864  349640 start.go:496] detecting cgroup driver to use...
	I1227 20:29:27.849891  349640 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 20:29:27.850059  349640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:29:27.863872  349640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:29:27.876702  349640 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:29:27.876753  349640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:29:27.890649  349640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:29:27.903058  349640 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:29:27.992790  349640 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:29:28.078394  349640 docker.go:234] disabling docker service ...
	I1227 20:29:28.078471  349640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:29:28.093111  349640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:29:28.105866  349640 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:29:28.195542  349640 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:29:28.278015  349640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:29:28.291348  349640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:29:28.305334  349640 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:29:28.305405  349640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.314550  349640 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 20:29:28.314619  349640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.324597  349640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.334691  349640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.346435  349640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:29:28.356445  349640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.366534  349640 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.375089  349640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.384484  349640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:29:28.392136  349640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:29:28.399804  349640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:29:28.488345  349640 ssh_runner.go:195] Run: sudo systemctl restart crio
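	The run of sed commands above edits CRI-O's drop-in in place (pause image, systemd cgroup manager, conmon cgroup, unprivileged-port sysctl) before the daemon-reload and restart. As a rough sketch only, the relevant keys in /etc/crio/crio.conf.d/02-crio.conf would end up looking something like this after those edits; the file shipped in the kicbase image carries more settings and the exact layout may differ:
	
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"
	
	    [crio.runtime]
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	
	The result can be inspected directly on the node, e.g. minikube -p newest-cni-307728 ssh -- sudo cat /etc/crio/crio.conf.d/02-crio.conf.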
	I1227 20:29:28.627177  349640 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:29:28.627250  349640 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:29:28.631981  349640 start.go:574] Will wait 60s for crictl version
	I1227 20:29:28.632034  349640 ssh_runner.go:195] Run: which crictl
	I1227 20:29:28.635757  349640 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:29:28.661999  349640 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:29:28.662074  349640 ssh_runner.go:195] Run: crio --version
	I1227 20:29:28.692995  349640 ssh_runner.go:195] Run: crio --version
	I1227 20:29:28.727086  349640 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:29:28.728112  349640 cli_runner.go:164] Run: docker network inspect newest-cni-307728 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:29:28.747478  349640 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1227 20:29:28.752558  349640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:29:28.764745  349640 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1227 20:29:28.765905  349640 kubeadm.go:884] updating cluster {Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:29:28.766060  349640 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:29:28.766106  349640 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:29:28.806106  349640 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:29:28.806131  349640 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:29:28.806184  349640 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:29:28.834446  349640 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:29:28.834465  349640 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:29:28.834473  349640 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0 crio true true} ...
	I1227 20:29:28.834603  349640 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-307728 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:29:28.834686  349640 ssh_runner.go:195] Run: crio config
	I1227 20:29:28.888266  349640 cni.go:84] Creating CNI manager for ""
	I1227 20:29:28.888297  349640 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:29:28.888314  349640 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1227 20:29:28.888343  349640 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-307728 NodeName:newest-cni-307728 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:29:28.888514  349640 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-307728"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:29:28.888582  349640 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:29:28.896598  349640 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:29:28.896658  349640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:29:28.904048  349640 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1227 20:29:28.916029  349640 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:29:28.928184  349640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
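	The rendered kubeadm YAML above is what gets written to /var/tmp/minikube/kubeadm.yaml.new here; the diff against the existing /var/tmp/minikube/kubeadm.yaml later in this log is what lets minikube conclude the running cluster needs no reconfiguration. If a manifest like this ever needs to be sanity-checked by hand, a hypothetical invocation (assuming the bundled kubeadm supports the `config validate` subcommand available in recent releases) would be:
	
	    minikube -p newest-cni-307728 ssh -- sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new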
	I1227 20:29:28.940621  349640 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:29:28.944032  349640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:29:28.953826  349640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:29:29.049430  349640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:29:29.069168  349640 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728 for IP: 192.168.103.2
	I1227 20:29:29.069184  349640 certs.go:195] generating shared ca certs ...
	I1227 20:29:29.069197  349640 certs.go:227] acquiring lock for ca certs: {Name:mkf159d13acb73e6d0ac046e4aaef1dce56a185d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:29:29.069335  349640 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key
	I1227 20:29:29.069415  349640 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key
	I1227 20:29:29.069430  349640 certs.go:257] generating profile certs ...
	I1227 20:29:29.069535  349640 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/client.key
	I1227 20:29:29.069615  349640 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.key.f45295df
	I1227 20:29:29.069674  349640 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.key
	I1227 20:29:29.069814  349640 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem (1338 bytes)
	W1227 20:29:29.069857  349640 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427_empty.pem, impossibly tiny 0 bytes
	I1227 20:29:29.069870  349640 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 20:29:29.069905  349640 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:29:29.069966  349640 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:29:29.070003  349640 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem (1675 bytes)
	I1227 20:29:29.070061  349640 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:29:29.070605  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:29:29.089009  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:29:29.112171  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:29:29.134212  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 20:29:29.158133  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 20:29:29.181988  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:29:29.200678  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:29:29.218007  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:29:29.235685  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem --> /usr/share/ca-certificates/14427.pem (1338 bytes)
	I1227 20:29:29.255652  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /usr/share/ca-certificates/144272.pem (1708 bytes)
	I1227 20:29:29.274505  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:29:29.294020  349640 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:29:29.308273  349640 ssh_runner.go:195] Run: openssl version
	I1227 20:29:29.314351  349640 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14427.pem
	I1227 20:29:29.321706  349640 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14427.pem /etc/ssl/certs/14427.pem
	I1227 20:29:29.329192  349640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14427.pem
	I1227 20:29:29.332801  349640 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 19:58 /usr/share/ca-certificates/14427.pem
	I1227 20:29:29.332846  349640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14427.pem
	I1227 20:29:29.370829  349640 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:29:29.378564  349640 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/144272.pem
	I1227 20:29:29.386204  349640 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/144272.pem /etc/ssl/certs/144272.pem
	I1227 20:29:29.393976  349640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144272.pem
	I1227 20:29:29.397479  349640 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 19:58 /usr/share/ca-certificates/144272.pem
	I1227 20:29:29.397525  349640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144272.pem
	I1227 20:29:29.433631  349640 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:29:29.440987  349640 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:29:29.449024  349640 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:29:29.457943  349640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:29:29.461620  349640 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:29:29.461665  349640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:29:29.499185  349640 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
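	The openssl/ln pairs above follow the standard CA hash-link layout: the value printed by `openssl x509 -hash -noout -in CERT` becomes the basename of the /etc/ssl/certs/<hash>.0 symlink that is then tested (b5213941.0 for minikubeCA.pem in this run). A minimal sketch of the same step done by hand, assuming the cert has already been copied to /usr/share/ca-certificates:
	
	    # print the subject hash, then create the hash-named symlink the checks above look for
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"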
	I1227 20:29:29.506551  349640 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:29:29.510965  349640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:29:29.551280  349640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:29:29.589754  349640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:29:29.641048  349640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:29:29.698248  349640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:29:29.757405  349640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1227 20:29:29.803735  349640 kubeadm.go:401] StartCluster: {Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:29:29.803836  349640 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:29:29.803901  349640 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:29:29.835928  349640 cri.go:96] found id: "2c126392630ac0e9cc664d34784d1f3d6649b49a4944fb5ff018e652894a5f6c"
	I1227 20:29:29.835952  349640 cri.go:96] found id: "2468df267da64ce7026d23c356156eccc868b96989c5479a2b5dfe05bcf8f0dc"
	I1227 20:29:29.835959  349640 cri.go:96] found id: "5ae0e51100b8fd95d4ea6bcc9d74ec6251f93fbcceeee0b2dd3cee32dada6c31"
	I1227 20:29:29.835967  349640 cri.go:96] found id: "413cb76a28516298082c7dbccd5e654b39bcb99ebcb7579b812852e885f9f50f"
	I1227 20:29:29.835971  349640 cri.go:96] found id: ""
	I1227 20:29:29.836012  349640 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 20:29:29.848165  349640 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:29:29Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:29:29.848217  349640 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:29:29.857470  349640 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:29:29.857490  349640 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:29:29.857540  349640 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:29:29.865790  349640 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:29:29.866736  349640 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-307728" does not appear in /home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:29:29.867255  349640 kubeconfig.go:62] /home/jenkins/minikube-integration/22332-10897/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-307728" cluster setting kubeconfig missing "newest-cni-307728" context setting]
	I1227 20:29:29.867965  349640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/kubeconfig: {Name:mk4ccafb996928672a8e78a62ce47d8add645009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
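	kubeconfig.go is reporting that the host kubeconfig has no cluster or context entry for this profile yet, so minikube rewrites the file before waiting on the node. A hand-rolled equivalent would look roughly like the following (illustrative only; minikube writes the kubeconfig directly rather than shelling out to kubectl, and the paths are the ones this run logs elsewhere):
	
	    kubectl config set-cluster newest-cni-307728 --server=https://192.168.103.2:8443 \
	      --certificate-authority=/home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --embed-certs=true
	    kubectl config set-credentials newest-cni-307728 \
	      --client-certificate=/home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/client.crt \
	      --client-key=/home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/client.key
	    kubectl config set-context newest-cni-307728 --cluster=newest-cni-307728 --user=newest-cni-307728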
	I1227 20:29:29.869656  349640 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:29:29.877605  349640 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1227 20:29:29.877634  349640 kubeadm.go:602] duration metric: took 20.137461ms to restartPrimaryControlPlane
	I1227 20:29:29.877651  349640 kubeadm.go:403] duration metric: took 73.916779ms to StartCluster
	I1227 20:29:29.877669  349640 settings.go:142] acquiring lock: {Name:mkf3077b12a3cef0d75d0d0642c2652aa74718f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:29:29.877726  349640 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:29:29.879534  349640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/kubeconfig: {Name:mk4ccafb996928672a8e78a62ce47d8add645009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:29:29.879773  349640 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:29:29.880023  349640 config.go:182] Loaded profile config "newest-cni-307728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:29:29.880084  349640 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:29:29.880164  349640 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-307728"
	I1227 20:29:29.880179  349640 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-307728"
	W1227 20:29:29.880192  349640 addons.go:248] addon storage-provisioner should already be in state true
	I1227 20:29:29.880216  349640 host.go:66] Checking if "newest-cni-307728" exists ...
	I1227 20:29:29.880295  349640 addons.go:70] Setting dashboard=true in profile "newest-cni-307728"
	I1227 20:29:29.880319  349640 addons.go:70] Setting default-storageclass=true in profile "newest-cni-307728"
	I1227 20:29:29.880353  349640 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-307728"
	I1227 20:29:29.880324  349640 addons.go:239] Setting addon dashboard=true in "newest-cni-307728"
	W1227 20:29:29.880433  349640 addons.go:248] addon dashboard should already be in state true
	I1227 20:29:29.880462  349640 host.go:66] Checking if "newest-cni-307728" exists ...
	I1227 20:29:29.880671  349640 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:29.880672  349640 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:29.880907  349640 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:29.885068  349640 out.go:179] * Verifying Kubernetes components...
	I1227 20:29:29.888082  349640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:29:29.906427  349640 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:29:29.906423  349640 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 20:29:29.906727  349640 addons.go:239] Setting addon default-storageclass=true in "newest-cni-307728"
	W1227 20:29:29.906749  349640 addons.go:248] addon default-storageclass should already be in state true
	I1227 20:29:29.906798  349640 host.go:66] Checking if "newest-cni-307728" exists ...
	I1227 20:29:29.907308  349640 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:29.908502  349640 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:29:29.908563  349640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:29:29.908620  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:29.909594  349640 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 20:29:29.910726  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 20:29:29.910750  349640 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 20:29:29.910812  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:29.939150  349640 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:29:29.939175  349640 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:29:29.939233  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:29.940432  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:29.944922  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:29.977263  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:30.045064  349640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:29:30.058167  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 20:29:30.058191  349640 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 20:29:30.061437  349640 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:29:30.061487  349640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:29:30.073175  349640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:29:30.076392  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 20:29:30.076416  349640 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 20:29:30.079332  349640 api_server.go:72] duration metric: took 199.523544ms to wait for apiserver process to appear ...
	I1227 20:29:30.079356  349640 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:29:30.079373  349640 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1227 20:29:30.090441  349640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:29:30.094269  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 20:29:30.094291  349640 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 20:29:30.114023  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 20:29:30.114046  349640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 20:29:30.131494  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 20:29:30.131515  349640 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 20:29:30.149541  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 20:29:30.149615  349640 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 20:29:30.167283  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 20:29:30.167310  349640 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 20:29:30.184004  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 20:29:30.184024  349640 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 20:29:30.201013  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:29:30.201038  349640 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 20:29:30.217298  349640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
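	All three addon applies run through the node's bundled kubectl against /var/lib/minikube/kubeconfig. Once they complete, the resulting objects can be checked from the host, e.g. (assuming the dashboard addon's usual kubernetes-dashboard namespace and the storage-provisioner pod in kube-system):
	
	    kubectl --context newest-cni-307728 -n kubernetes-dashboard get deploy,svc
	    kubectl --context newest-cni-307728 -n kube-system get pod storage-provisioner
	    kubectl --context newest-cni-307728 get storageclass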
	I1227 20:29:30.994230  349640 api_server.go:325] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1227 20:29:30.994265  349640 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1227 20:29:30.994280  349640 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1227 20:29:31.078728  349640 api_server.go:325] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1227 20:29:31.078755  349640 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1227 20:29:31.079882  349640 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1227 20:29:31.090296  349640 api_server.go:325] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1227 20:29:31.090325  349640 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1227 20:29:31.580397  349640 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1227 20:29:31.585748  349640 api_server.go:325] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:29:31.585801  349640 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
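	This is the normal restart sequence for the probe loop: the first responses are 403 because anonymous access to /healthz is not authorized until the RBAC bootstrap roles exist, and the later 500s list exactly which poststarthooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still pending. The same endpoint can be queried with proper credentials once the kubeconfig is written, e.g.:
	
	    kubectl --context newest-cni-307728 get --raw '/healthz?verbose'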
	I1227 20:29:31.606183  349640 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.532971297s)
	I1227 20:29:31.606241  349640 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.515771231s)
	I1227 20:29:31.606358  349640 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.389020153s)
	I1227 20:29:31.607861  349640 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-307728 addons enable metrics-server
	
	I1227 20:29:31.616813  349640 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 20:29:31.617978  349640 addons.go:530] duration metric: took 1.737896941s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1227 20:29:32.080229  349640 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1227 20:29:32.084464  349640 api_server.go:325] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:29:32.084506  349640 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:29:32.580069  349640 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1227 20:29:32.584664  349640 api_server.go:325] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1227 20:29:32.585710  349640 api_server.go:141] control plane version: v1.35.0
	I1227 20:29:32.585733  349640 api_server.go:131] duration metric: took 2.506370541s to wait for apiserver health ...
	I1227 20:29:32.585741  349640 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:29:32.588682  349640 system_pods.go:59] 8 kube-system pods found
	I1227 20:29:32.588707  349640 system_pods.go:61] "coredns-7d764666f9-v4xtw" [54b9ffbd-579b-483a-aa05-a65988e43aae] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 20:29:32.588718  349640 system_pods.go:61] "etcd-newest-cni-307728" [47c59b02-ea05-4deb-a2d5-f33fe18e738b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:29:32.588742  349640 system_pods.go:61] "kindnet-6z4tn" [93ba591e-f91b-4d17-bc19-0df196548fdd] Running
	I1227 20:29:32.588751  349640 system_pods.go:61] "kube-apiserver-newest-cni-307728" [ff05d4da-e496-4611-90a2-32a9e49a76a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:29:32.588759  349640 system_pods.go:61] "kube-controller-manager-newest-cni-307728" [98a6898f-bd6c-4bb5-97eb-767920c25375] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:29:32.588772  349640 system_pods.go:61] "kube-proxy-9qccb" [7af7999b-ede9-4da5-8e6f-df77472e1cdd] Running
	I1227 20:29:32.588778  349640 system_pods.go:61] "kube-scheduler-newest-cni-307728" [cac454d9-fa90-45da-b22c-5d0e23dc78a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:29:32.588785  349640 system_pods.go:61] "storage-provisioner" [b4c1fa65-07d5-4f68-a68b-43acd8569dd1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 20:29:32.588790  349640 system_pods.go:74] duration metric: took 3.044295ms to wait for pod list to return data ...
	I1227 20:29:32.588801  349640 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:29:32.591068  349640 default_sa.go:45] found service account: "default"
	I1227 20:29:32.591087  349640 default_sa.go:55] duration metric: took 2.281836ms for default service account to be created ...
	I1227 20:29:32.591100  349640 kubeadm.go:587] duration metric: took 2.711295065s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 20:29:32.591132  349640 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:29:32.592996  349640 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 20:29:32.593016  349640 node_conditions.go:123] node cpu capacity is 8
	I1227 20:29:32.593030  349640 node_conditions.go:105] duration metric: took 1.888982ms to run NodePressure ...
	I1227 20:29:32.593046  349640 start.go:242] waiting for startup goroutines ...
	I1227 20:29:32.593062  349640 start.go:247] waiting for cluster config update ...
	I1227 20:29:32.593076  349640 start.go:256] writing updated cluster config ...
	I1227 20:29:32.593351  349640 ssh_runner.go:195] Run: rm -f paused
	I1227 20:29:32.641222  349640 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	W1227 20:29:30.142107  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	W1227 20:29:32.640365  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	I1227 20:29:32.643768  349640 out.go:179] * Done! kubectl is now configured to use "newest-cni-307728" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.462410651Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.466124153Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=bb488b54-d163-459b-b048-187392d9e293 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.466579302Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=1b125151-53d7-40f4-83d7-405a24653846 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.468372335Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.469072689Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.469289378Z" level=info msg="Ran pod sandbox 7a115c67fcc26f8b9b7c0d3c4ff39ecd2d77c963dba308c2607612953d781d75 with infra container: kube-system/kindnet-6z4tn/POD" id=1b125151-53d7-40f4-83d7-405a24653846 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.470058751Z" level=info msg="Ran pod sandbox 2518a11b6026399f4e6fce6259a9973c4e8308aa3a28a3a40e5240ca503455c9 with infra container: kube-system/kube-proxy-9qccb/POD" id=bb488b54-d163-459b-b048-187392d9e293 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.470527031Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=582a6dcc-d254-4402-9906-434870531e9a name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.471063805Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=a1ced8a8-0b29-46fb-b8b2-6b40151a8fc7 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.471417615Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=a6c0130c-e72a-4286-85b7-f1e5f6c2fb35 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.47194Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=567961d1-164c-46f0-a12d-1437577d461d name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.472476451Z" level=info msg="Creating container: kube-system/kindnet-6z4tn/kindnet-cni" id=26427658-74c2-44ef-a7c6-1ed6eaf3c482 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.472553273Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.472809766Z" level=info msg="Creating container: kube-system/kube-proxy-9qccb/kube-proxy" id=e0addffd-f391-46b7-9376-e9a42c5d3bd4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.472905063Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.476882425Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.477450915Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.481998454Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.482457619Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.504050766Z" level=info msg="Created container 2bb35280e51b042e5f2be8f726a1af43521b6309545b35ce7d6a54505ce3a536: kube-system/kindnet-6z4tn/kindnet-cni" id=26427658-74c2-44ef-a7c6-1ed6eaf3c482 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.505017694Z" level=info msg="Starting container: 2bb35280e51b042e5f2be8f726a1af43521b6309545b35ce7d6a54505ce3a536" id=4dbdf15f-772c-4bbb-bc71-8f30ed2b6d6a name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.507692086Z" level=info msg="Started container" PID=1059 containerID=2bb35280e51b042e5f2be8f726a1af43521b6309545b35ce7d6a54505ce3a536 description=kube-system/kindnet-6z4tn/kindnet-cni id=4dbdf15f-772c-4bbb-bc71-8f30ed2b6d6a name=/runtime.v1.RuntimeService/StartContainer sandboxID=7a115c67fcc26f8b9b7c0d3c4ff39ecd2d77c963dba308c2607612953d781d75
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.511886997Z" level=info msg="Created container d4304dcfa6b0cb24706eccf29e75d7f268dee486b4abfa73bbb8fcf240c93492: kube-system/kube-proxy-9qccb/kube-proxy" id=e0addffd-f391-46b7-9376-e9a42c5d3bd4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.512532592Z" level=info msg="Starting container: d4304dcfa6b0cb24706eccf29e75d7f268dee486b4abfa73bbb8fcf240c93492" id=5f0f5ae7-423b-474b-8338-3ded8d138bca name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:29:31 newest-cni-307728 crio[526]: time="2025-12-27T20:29:31.51535273Z" level=info msg="Started container" PID=1060 containerID=d4304dcfa6b0cb24706eccf29e75d7f268dee486b4abfa73bbb8fcf240c93492 description=kube-system/kube-proxy-9qccb/kube-proxy id=5f0f5ae7-423b-474b-8338-3ded8d138bca name=/runtime.v1.RuntimeService/StartContainer sandboxID=2518a11b6026399f4e6fce6259a9973c4e8308aa3a28a3a40e5240ca503455c9
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d4304dcfa6b0c       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8   5 seconds ago       Running             kube-proxy                1                   2518a11b60263       kube-proxy-9qccb                            kube-system
	2bb35280e51b0       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251   5 seconds ago       Running             kindnet-cni               1                   7a115c67fcc26       kindnet-6z4tn                               kube-system
	2c126392630ac       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc   7 seconds ago       Running             kube-scheduler            1                   dbe168cb7660b       kube-scheduler-newest-cni-307728            kube-system
	2468df267da64       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508   7 seconds ago       Running             kube-controller-manager   1                   36cedc9f244ae       kube-controller-manager-newest-cni-307728   kube-system
	5ae0e51100b8f       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499   7 seconds ago       Running             kube-apiserver            1                   69455ecafaf9d       kube-apiserver-newest-cni-307728            kube-system
	413cb76a28516       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   7 seconds ago       Running             etcd                      1                   44109c166e1f5       etcd-newest-cni-307728                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-307728
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-307728
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=newest-cni-307728
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_29_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:29:01 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-307728
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:29:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:29:31 +0000   Sat, 27 Dec 2025 20:29:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:29:31 +0000   Sat, 27 Dec 2025 20:29:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:29:31 +0000   Sat, 27 Dec 2025 20:29:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 27 Dec 2025 20:29:31 +0000   Sat, 27 Dec 2025 20:29:00 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-307728
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                b8493783-f7be-4c30-8a0f-ec2eeceb6491
	  Boot ID:                    e862e3ac-f8e7-431b-a536-9252fc29cb3f
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-307728                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-6z4tn                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-newest-cni-307728             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-newest-cni-307728    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-9qccb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-newest-cni-307728             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  29s   node-controller  Node newest-cni-307728 event: Registered Node newest-cni-307728 in Controller
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-307728 event: Registered Node newest-cni-307728 in Controller
	
	
	==> dmesg <==
	[  +0.091803] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026212] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.286875] kauditd_printk_skb: 47 callbacks suppressed
	[Dec27 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e cb b0 88 25 d7 08 06
	[ +12.641300] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 46 2d 1c 8d 7d 08 06
	[  +0.000396] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e cb b0 88 25 d7 08 06
	[Dec27 20:26] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 11 8b 52 63 6c 08 06
	[ +22.750477] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 b6 be 1c 32 17 08 06
	[  +0.000388] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 11 8b 52 63 6c 08 06
	[ +11.650371] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae cd 49 2a 1d c0 08 06
	[Dec27 20:27] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 53 ca 84 73 c0 08 06
	[  +0.000337] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 74 f5 52 32 9a 08 06
	[ +17.275927] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 80 6a 51 25 19 08 06
	[  +0.000341] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ae cd 49 2a 1d c0 08 06
	
	
	==> etcd [413cb76a28516298082c7dbccd5e654b39bcb99ebcb7579b812852e885f9f50f] <==
	{"level":"info","ts":"2025-12-27T20:29:29.759186Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T20:29:29.759267Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T20:29:29.759334Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-27T20:29:29.759702Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T20:29:29.759737Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T20:29:29.759693Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-27T20:29:29.759813Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-27T20:29:29.950139Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T20:29:29.950196Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T20:29:29.950274Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-27T20:29:29.950288Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:29:29.950307Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T20:29:29.952084Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-27T20:29:29.952719Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:29:29.953626Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T20:29:29.953677Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-27T20:29:29.959426Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:newest-cni-307728 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:29:29.959524Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:29:29.959549Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:29:29.960289Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:29:29.961996Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T20:29:29.961507Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:29:29.964905Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:29:29.965967Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:29:29.968121Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 20:29:37 up  1:12,  0 user,  load average: 2.80, 3.08, 2.26
	Linux newest-cni-307728 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2bb35280e51b042e5f2be8f726a1af43521b6309545b35ce7d6a54505ce3a536] <==
	I1227 20:29:31.679982       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:29:31.680265       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1227 20:29:31.680410       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:29:31.680436       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:29:31.680467       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:29:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:29:31.885010       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:29:31.885040       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:29:31.885051       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:29:31.885200       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [5ae0e51100b8fd95d4ea6bcc9d74ec6251f93fbcceeee0b2dd3cee32dada6c31] <==
	I1227 20:29:31.122479       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:31.122507       1 policy_source.go:248] refreshing policies
	I1227 20:29:31.169477       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:29:31.185688       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 20:29:31.185943       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 20:29:31.185955       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 20:29:31.185960       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:31.185960       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 20:29:31.186448       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 20:29:31.186653       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 20:29:31.186744       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 20:29:31.190907       1 cache.go:39] Caches are synced for autoregister controller
	I1227 20:29:31.194508       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 20:29:31.196034       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:29:31.220980       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:29:31.405100       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 20:29:31.437827       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 20:29:31.455506       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:29:31.466136       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:29:31.507132       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.172.141"}
	I1227 20:29:31.517843       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.137.216"}
	I1227 20:29:31.989196       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:29:34.611140       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 20:29:34.660643       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:29:34.710438       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2468df267da64ce7026d23c356156eccc868b96989c5479a2b5dfe05bcf8f0dc] <==
	I1227 20:29:34.218649       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:29:34.218654       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.218659       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.219190       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:29:34.219356       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.220503       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.220578       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.220588       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.220826       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.220874       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.221704       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.221754       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.222483       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.221739       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.222641       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.222650       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.222683       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.222717       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.222868       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.227036       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.229284       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.319595       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.321754       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:34.321770       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:29:34.321776       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [d4304dcfa6b0cb24706eccf29e75d7f268dee486b4abfa73bbb8fcf240c93492] <==
	I1227 20:29:31.551895       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:29:31.622582       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:29:31.723521       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:31.723562       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1227 20:29:31.723646       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:29:31.741003       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:29:31.741068       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:29:31.746096       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:29:31.746506       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:29:31.746538       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:29:31.747728       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:29:31.747748       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:29:31.747783       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:29:31.747786       1 config.go:309] "Starting node config controller"
	I1227 20:29:31.747797       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:29:31.747809       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:29:31.747782       1 config.go:200] "Starting service config controller"
	I1227 20:29:31.747817       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:29:31.747789       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:29:31.848663       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:29:31.848679       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 20:29:31.848709       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2c126392630ac0e9cc664d34784d1f3d6649b49a4944fb5ff018e652894a5f6c] <==
	I1227 20:29:31.031978       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:29:31.034909       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 20:29:31.035069       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 20:29:31.035084       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:29:31.035106       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1227 20:29:31.085587       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1227 20:29:31.086824       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 20:29:31.086990       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 20:29:31.087111       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 20:29:31.087221       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 20:29:31.087301       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 20:29:31.087365       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 20:29:31.087471       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 20:29:31.087534       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 20:29:31.087557       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 20:29:31.087609       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 20:29:31.087629       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 20:29:31.087712       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 20:29:31.087792       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 20:29:31.087806       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 20:29:31.091504       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 20:29:31.091885       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 20:29:31.092048       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 20:29:31.092212       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	I1227 20:29:32.536099       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: I1227 20:29:31.211406     676 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-307728"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: I1227 20:29:31.211568     676 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-307728"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: I1227 20:29:31.211608     676 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: E1227 20:29:31.211671     676 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-307728\" already exists" pod="kube-system/kube-scheduler-newest-cni-307728"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: E1227 20:29:31.211743     676 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-307728\" already exists" pod="kube-system/etcd-newest-cni-307728"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: E1227 20:29:31.211758     676 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-307728" containerName="kube-scheduler"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: E1227 20:29:31.211810     676 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-307728" containerName="etcd"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: I1227 20:29:31.212494     676 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: I1227 20:29:31.213131     676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/93ba591e-f91b-4d17-bc19-0df196548fdd-cni-cfg\") pod \"kindnet-6z4tn\" (UID: \"93ba591e-f91b-4d17-bc19-0df196548fdd\") " pod="kube-system/kindnet-6z4tn"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: I1227 20:29:31.213172     676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93ba591e-f91b-4d17-bc19-0df196548fdd-xtables-lock\") pod \"kindnet-6z4tn\" (UID: \"93ba591e-f91b-4d17-bc19-0df196548fdd\") " pod="kube-system/kindnet-6z4tn"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: I1227 20:29:31.213214     676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7af7999b-ede9-4da5-8e6f-df77472e1cdd-xtables-lock\") pod \"kube-proxy-9qccb\" (UID: \"7af7999b-ede9-4da5-8e6f-df77472e1cdd\") " pod="kube-system/kube-proxy-9qccb"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: I1227 20:29:31.213238     676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7af7999b-ede9-4da5-8e6f-df77472e1cdd-lib-modules\") pod \"kube-proxy-9qccb\" (UID: \"7af7999b-ede9-4da5-8e6f-df77472e1cdd\") " pod="kube-system/kube-proxy-9qccb"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: I1227 20:29:31.213267     676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93ba591e-f91b-4d17-bc19-0df196548fdd-lib-modules\") pod \"kindnet-6z4tn\" (UID: \"93ba591e-f91b-4d17-bc19-0df196548fdd\") " pod="kube-system/kindnet-6z4tn"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: E1227 20:29:31.220006     676 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-307728\" already exists" pod="kube-system/kube-apiserver-newest-cni-307728"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: E1227 20:29:31.220123     676 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-307728" containerName="kube-apiserver"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: E1227 20:29:31.220566     676 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-307728\" already exists" pod="kube-system/kube-controller-manager-newest-cni-307728"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: I1227 20:29:31.220614     676 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-307728"
	Dec 27 20:29:31 newest-cni-307728 kubelet[676]: E1227 20:29:31.229846     676 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-307728\" already exists" pod="kube-system/kube-scheduler-newest-cni-307728"
	Dec 27 20:29:32 newest-cni-307728 kubelet[676]: E1227 20:29:32.206479     676 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-307728" containerName="kube-apiserver"
	Dec 27 20:29:32 newest-cni-307728 kubelet[676]: E1227 20:29:32.206531     676 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-307728" containerName="kube-scheduler"
	Dec 27 20:29:32 newest-cni-307728 kubelet[676]: E1227 20:29:32.206759     676 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-307728" containerName="kube-controller-manager"
	Dec 27 20:29:32 newest-cni-307728 kubelet[676]: E1227 20:29:32.207022     676 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-307728" containerName="etcd"
	Dec 27 20:29:33 newest-cni-307728 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 20:29:33 newest-cni-307728 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 20:29:33 newest-cni-307728 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-307728 -n newest-cni-307728
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-307728 -n newest-cni-307728: exit status 2 (316.811641ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-307728 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-v4xtw storage-provisioner dashboard-metrics-scraper-867fb5f87b-ttpzp kubernetes-dashboard-b84665fb8-29j2c
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-307728 describe pod coredns-7d764666f9-v4xtw storage-provisioner dashboard-metrics-scraper-867fb5f87b-ttpzp kubernetes-dashboard-b84665fb8-29j2c
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-307728 describe pod coredns-7d764666f9-v4xtw storage-provisioner dashboard-metrics-scraper-867fb5f87b-ttpzp kubernetes-dashboard-b84665fb8-29j2c: exit status 1 (57.597955ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-v4xtw" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-ttpzp" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-29j2c" not found

** /stderr **
helpers_test.go:288: kubectl --context newest-cni-307728 describe pod coredns-7d764666f9-v4xtw storage-provisioner dashboard-metrics-scraper-867fb5f87b-ttpzp kubernetes-dashboard-b84665fb8-29j2c: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (4.84s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-954154 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-954154 --alsologtostderr -v=1: exit status 80 (1.571139006s)

-- stdout --
	* Pausing node default-k8s-diff-port-954154 ... 
	
	

-- /stdout --
** stderr ** 
	I1227 20:29:51.281461  355953 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:29:51.281943  355953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:29:51.281959  355953 out.go:374] Setting ErrFile to fd 2...
	I1227 20:29:51.281967  355953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:29:51.282421  355953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:29:51.283011  355953 out.go:368] Setting JSON to false
	I1227 20:29:51.283067  355953 mustload.go:66] Loading cluster: default-k8s-diff-port-954154
	I1227 20:29:51.283461  355953 config.go:182] Loaded profile config "default-k8s-diff-port-954154": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:29:51.283891  355953 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-954154 --format={{.State.Status}}
	I1227 20:29:51.302112  355953 host.go:66] Checking if "default-k8s-diff-port-954154" exists ...
	I1227 20:29:51.302400  355953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:29:51.360358  355953 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-27 20:29:51.349350719 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:29:51.361071  355953 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22332/minikube-v1.37.0-1766811082-22332-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766811082-22332/minikube-v1.37.0-1766811082-22332-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766811082-22332-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:default-k8s-diff-port-954154 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarni
ng:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1227 20:29:51.362775  355953 out.go:179] * Pausing node default-k8s-diff-port-954154 ... 
	I1227 20:29:51.363820  355953 host.go:66] Checking if "default-k8s-diff-port-954154" exists ...
	I1227 20:29:51.364127  355953 ssh_runner.go:195] Run: systemctl --version
	I1227 20:29:51.364173  355953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-954154
	I1227 20:29:51.381515  355953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/default-k8s-diff-port-954154/id_rsa Username:docker}
	I1227 20:29:51.469365  355953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:29:51.495393  355953 pause.go:52] kubelet running: true
	I1227 20:29:51.495485  355953 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:29:51.651523  355953 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:29:51.651620  355953 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:29:51.714465  355953 cri.go:96] found id: "2af3abf306111acd86d61aed1f801fe21ed75889c95284c3039cead7dfd97901"
	I1227 20:29:51.714483  355953 cri.go:96] found id: "a235d8e194a333df66ad83d10b5899176171cd0d7c0c95256c8864cb76d3b1c2"
	I1227 20:29:51.714487  355953 cri.go:96] found id: "9534087ad19d0cf1c6a64a0fc06e25e8871c31789bd13e3b1daa949f660f0cb3"
	I1227 20:29:51.714490  355953 cri.go:96] found id: "90d96883673c78a13b5329162bd2a7485dcb13c728e7539421116adef8f9b6c4"
	I1227 20:29:51.714493  355953 cri.go:96] found id: "a83814d26e9fe52126c4d08033b6e1e1f2a478f9db8a48f3f69ebb4c0202e7d1"
	I1227 20:29:51.714499  355953 cri.go:96] found id: "5931439a8c0a571b5456aa8ff89ec7efa4a328c297d4825276b5ce8da13b99a6"
	I1227 20:29:51.714502  355953 cri.go:96] found id: "706a22c5fabaf1ee5036f9f7e94b4c7a02acaeb2002009e6416b6682929512f2"
	I1227 20:29:51.714505  355953 cri.go:96] found id: "0959afbe1a9958167e4657feae2ee275c220e183182d01f3bfb3c248002f75ad"
	I1227 20:29:51.714507  355953 cri.go:96] found id: "8b0860861ddb032e4078fd3575653ab5f5a77cc25e17d9ecb7910330acbef6e8"
	I1227 20:29:51.714514  355953 cri.go:96] found id: "aeb3a9e4af58f86b907a9dc8c2dbf3b0d1d10b345599d18f8009993fd2521fee"
	I1227 20:29:51.714517  355953 cri.go:96] found id: "ab53c9c42b91e3b26fe7869e87d99f9ffa94077f731d37a6fd683cc5012d55de"
	I1227 20:29:51.714520  355953 cri.go:96] found id: ""
	I1227 20:29:51.714556  355953 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:29:51.725808  355953 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:29:51Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:29:52.038183  355953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:29:52.050465  355953 pause.go:52] kubelet running: false
	I1227 20:29:52.050521  355953 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:29:52.191332  355953 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:29:52.191397  355953 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:29:52.255753  355953 cri.go:96] found id: "2af3abf306111acd86d61aed1f801fe21ed75889c95284c3039cead7dfd97901"
	I1227 20:29:52.255778  355953 cri.go:96] found id: "a235d8e194a333df66ad83d10b5899176171cd0d7c0c95256c8864cb76d3b1c2"
	I1227 20:29:52.255785  355953 cri.go:96] found id: "9534087ad19d0cf1c6a64a0fc06e25e8871c31789bd13e3b1daa949f660f0cb3"
	I1227 20:29:52.255789  355953 cri.go:96] found id: "90d96883673c78a13b5329162bd2a7485dcb13c728e7539421116adef8f9b6c4"
	I1227 20:29:52.255794  355953 cri.go:96] found id: "a83814d26e9fe52126c4d08033b6e1e1f2a478f9db8a48f3f69ebb4c0202e7d1"
	I1227 20:29:52.255800  355953 cri.go:96] found id: "5931439a8c0a571b5456aa8ff89ec7efa4a328c297d4825276b5ce8da13b99a6"
	I1227 20:29:52.255805  355953 cri.go:96] found id: "706a22c5fabaf1ee5036f9f7e94b4c7a02acaeb2002009e6416b6682929512f2"
	I1227 20:29:52.255810  355953 cri.go:96] found id: "0959afbe1a9958167e4657feae2ee275c220e183182d01f3bfb3c248002f75ad"
	I1227 20:29:52.255815  355953 cri.go:96] found id: "8b0860861ddb032e4078fd3575653ab5f5a77cc25e17d9ecb7910330acbef6e8"
	I1227 20:29:52.255824  355953 cri.go:96] found id: "aeb3a9e4af58f86b907a9dc8c2dbf3b0d1d10b345599d18f8009993fd2521fee"
	I1227 20:29:52.255828  355953 cri.go:96] found id: "ab53c9c42b91e3b26fe7869e87d99f9ffa94077f731d37a6fd683cc5012d55de"
	I1227 20:29:52.255833  355953 cri.go:96] found id: ""
	I1227 20:29:52.255884  355953 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:29:52.564630  355953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:29:52.576996  355953 pause.go:52] kubelet running: false
	I1227 20:29:52.577072  355953 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:29:52.716608  355953 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:29:52.716696  355953 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:29:52.777975  355953 cri.go:96] found id: "2af3abf306111acd86d61aed1f801fe21ed75889c95284c3039cead7dfd97901"
	I1227 20:29:52.777994  355953 cri.go:96] found id: "a235d8e194a333df66ad83d10b5899176171cd0d7c0c95256c8864cb76d3b1c2"
	I1227 20:29:52.778000  355953 cri.go:96] found id: "9534087ad19d0cf1c6a64a0fc06e25e8871c31789bd13e3b1daa949f660f0cb3"
	I1227 20:29:52.778005  355953 cri.go:96] found id: "90d96883673c78a13b5329162bd2a7485dcb13c728e7539421116adef8f9b6c4"
	I1227 20:29:52.778009  355953 cri.go:96] found id: "a83814d26e9fe52126c4d08033b6e1e1f2a478f9db8a48f3f69ebb4c0202e7d1"
	I1227 20:29:52.778015  355953 cri.go:96] found id: "5931439a8c0a571b5456aa8ff89ec7efa4a328c297d4825276b5ce8da13b99a6"
	I1227 20:29:52.778019  355953 cri.go:96] found id: "706a22c5fabaf1ee5036f9f7e94b4c7a02acaeb2002009e6416b6682929512f2"
	I1227 20:29:52.778023  355953 cri.go:96] found id: "0959afbe1a9958167e4657feae2ee275c220e183182d01f3bfb3c248002f75ad"
	I1227 20:29:52.778027  355953 cri.go:96] found id: "8b0860861ddb032e4078fd3575653ab5f5a77cc25e17d9ecb7910330acbef6e8"
	I1227 20:29:52.778036  355953 cri.go:96] found id: "aeb3a9e4af58f86b907a9dc8c2dbf3b0d1d10b345599d18f8009993fd2521fee"
	I1227 20:29:52.778041  355953 cri.go:96] found id: "ab53c9c42b91e3b26fe7869e87d99f9ffa94077f731d37a6fd683cc5012d55de"
	I1227 20:29:52.778045  355953 cri.go:96] found id: ""
	I1227 20:29:52.778100  355953 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:29:52.791672  355953 out.go:203] 
	W1227 20:29:52.792941  355953 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:29:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:29:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 20:29:52.792961  355953 out.go:285] * 
	* 
	W1227 20:29:52.794696  355953 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 20:29:52.795774  355953 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-954154 --alsologtostderr -v=1 failed: exit status 80
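The stderr above records the actual cause: the pause path shells out to runc to enumerate running containers, and that listing keeps failing because /run/runc does not exist on the CRI-O node; by then the kubelet has already been disabled, the retries hit the same error, and the command exits with GUEST_PAUSE (status 80). A hedged manual reproduction of that check on the node (commands taken from the log, not part of the test itself):

	# open a shell on the node container
	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-954154
	# the exact listing the pause code runs, and the state dir it needs
	sudo runc list -f json
	sudo ls /run/runc
	# CRI-O itself still reports the kube-system containers
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

If crictl still prints container IDs while /run/runc is absent, the runc-based listing can never succeed, which matches the repeated "open /run/runc: no such file or directory" retries above.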
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-954154
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-954154:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c38cf1a04b3b50e78185395c88764e088687423a81d7f074a8b0f01b542d6987",
	        "Created": "2025-12-27T20:27:45.398813644Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 340254,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:28:47.989059913Z",
	            "FinishedAt": "2025-12-27T20:28:46.736762371Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/c38cf1a04b3b50e78185395c88764e088687423a81d7f074a8b0f01b542d6987/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c38cf1a04b3b50e78185395c88764e088687423a81d7f074a8b0f01b542d6987/hostname",
	        "HostsPath": "/var/lib/docker/containers/c38cf1a04b3b50e78185395c88764e088687423a81d7f074a8b0f01b542d6987/hosts",
	        "LogPath": "/var/lib/docker/containers/c38cf1a04b3b50e78185395c88764e088687423a81d7f074a8b0f01b542d6987/c38cf1a04b3b50e78185395c88764e088687423a81d7f074a8b0f01b542d6987-json.log",
	        "Name": "/default-k8s-diff-port-954154",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-954154:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-954154",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c38cf1a04b3b50e78185395c88764e088687423a81d7f074a8b0f01b542d6987",
	                "LowerDir": "/var/lib/docker/overlay2/addfd789be6ed936f248b11101a678e63407804f63584452c7d6d0a2d65aa48f-init/diff:/var/lib/docker/overlay2/37a0694d5e09176fe02add5b16d603e44d65e3a985a3465775c60819ea782502/diff",
	                "MergedDir": "/var/lib/docker/overlay2/addfd789be6ed936f248b11101a678e63407804f63584452c7d6d0a2d65aa48f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/addfd789be6ed936f248b11101a678e63407804f63584452c7d6d0a2d65aa48f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/addfd789be6ed936f248b11101a678e63407804f63584452c7d6d0a2d65aa48f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-954154",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-954154/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-954154",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-954154",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-954154",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9aa0c984b9f3af0dc980b243a73d65937f76bb177f16789189e7fd703e5173dd",
	            "SandboxKey": "/var/run/docker/netns/9aa0c984b9f3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-954154": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8bb8ec9ff71cd755e87cbf3d8e42ebf773088a83f754b577a011fbcdb7983e0c",
	                    "EndpointID": "cece27508983d2d015abcfe8772b4fcd9ed787548fb1f99e115fa3e14523301c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "52:3d:92:ca:93:ed",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-954154",
	                        "c38cf1a04b3b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-954154 -n default-k8s-diff-port-954154
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-954154 -n default-k8s-diff-port-954154: exit status 2 (313.46033ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
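Exit status 2 here likely reflects only that the host container is Running while other components are not (the failed pause had already disabled the kubelet), which is why the helper treats it as possibly ok. A quick manual look at the full component breakdown, assuming the profile still exists, would be:

	out/minikube-linux-amd64 status -p default-k8s-diff-port-954154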
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-954154 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-954154 logs -n 25: (1.020200698s)
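For an upstream report, the advice box printed in the pause stderr asks for more than the 25-line excerpt that follows; a hedged collection sequence using the paths it names would be:

	out/minikube-linux-amd64 -p default-k8s-diff-port-954154 logs --file=logs.txt
	cp /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log .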
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ start   │ -p newest-cni-307728 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:29 UTC │
	│ image   │ no-preload-014435 image list --format=json                                                                                                                                                                                                    │ no-preload-014435                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ pause   │ -p no-preload-014435 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-014435                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p no-preload-014435                                                                                                                                                                                                                          │ no-preload-014435                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ addons  │ enable metrics-server -p newest-cni-307728 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p no-preload-014435                                                                                                                                                                                                                          │ no-preload-014435                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ start   │ -p test-preload-dl-gcs-588477 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-588477        │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ stop    │ -p newest-cni-307728 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ delete  │ -p test-preload-dl-gcs-588477                                                                                                                                                                                                                 │ test-preload-dl-gcs-588477        │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ start   │ -p test-preload-dl-github-805734 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-805734     │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p test-preload-dl-github-805734                                                                                                                                                                                                              │ test-preload-dl-github-805734     │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-275955 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-275955 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-275955                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-275955 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ addons  │ enable dashboard -p newest-cni-307728 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ start   │ -p newest-cni-307728 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ image   │ embed-certs-820583 image list --format=json                                                                                                                                                                                                   │ embed-certs-820583                │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ pause   │ -p embed-certs-820583 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-820583                │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p embed-certs-820583                                                                                                                                                                                                                         │ embed-certs-820583                │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ image   │ newest-cni-307728 image list --format=json                                                                                                                                                                                                    │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ pause   │ -p newest-cni-307728 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p embed-certs-820583                                                                                                                                                                                                                         │ embed-certs-820583                │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ delete  │ -p newest-cni-307728                                                                                                                                                                                                                          │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ delete  │ -p newest-cni-307728                                                                                                                                                                                                                          │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ image   │ default-k8s-diff-port-954154 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-954154      │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ pause   │ -p default-k8s-diff-port-954154 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-954154      │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:29:22
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:29:22.784538  349640 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:29:22.784794  349640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:29:22.784803  349640 out.go:374] Setting ErrFile to fd 2...
	I1227 20:29:22.784808  349640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:29:22.785052  349640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:29:22.785520  349640 out.go:368] Setting JSON to false
	I1227 20:29:22.786562  349640 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4312,"bootTime":1766863051,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 20:29:22.786612  349640 start.go:143] virtualization: kvm guest
	I1227 20:29:22.788250  349640 out.go:179] * [newest-cni-307728] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 20:29:22.789332  349640 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:29:22.789351  349640 notify.go:221] Checking for updates...
	I1227 20:29:22.791442  349640 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:29:22.792602  349640 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:29:22.793592  349640 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube
	I1227 20:29:22.794578  349640 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 20:29:22.795545  349640 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:29:22.796871  349640 config.go:182] Loaded profile config "newest-cni-307728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:29:22.797487  349640 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:29:22.820540  349640 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 20:29:22.820686  349640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:29:22.876976  349640 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-27 20:29:22.867077037 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:29:22.877116  349640 docker.go:319] overlay module found
	I1227 20:29:22.878722  349640 out.go:179] * Using the docker driver based on existing profile
	I1227 20:29:22.879763  349640 start.go:309] selected driver: docker
	I1227 20:29:22.879776  349640 start.go:928] validating driver "docker" against &{Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:29:22.879862  349640 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:29:22.880423  349640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:29:22.933111  349640 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-27 20:29:22.923700326 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:29:22.933397  349640 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 20:29:22.933437  349640 cni.go:84] Creating CNI manager for ""
	I1227 20:29:22.933495  349640 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:29:22.933527  349640 start.go:353] cluster config:
	{Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:29:22.935838  349640 out.go:179] * Starting "newest-cni-307728" primary control-plane node in "newest-cni-307728" cluster
	I1227 20:29:22.936870  349640 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:29:22.938035  349640 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:29:22.939178  349640 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:29:22.939218  349640 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 20:29:22.939230  349640 cache.go:65] Caching tarball of preloaded images
	I1227 20:29:22.939273  349640 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:29:22.939310  349640 preload.go:251] Found /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 20:29:22.939321  349640 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:29:22.939415  349640 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/config.json ...
	I1227 20:29:22.958953  349640 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:29:22.958973  349640 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:29:22.958989  349640 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:29:22.959021  349640 start.go:360] acquireMachinesLock for newest-cni-307728: {Name:mk68119d6288f8bd1ffbe980f508592c691efe9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:29:22.959080  349640 start.go:364] duration metric: took 32.398µs to acquireMachinesLock for "newest-cni-307728"
	I1227 20:29:22.959096  349640 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:29:22.959101  349640 fix.go:54] fixHost starting: 
	I1227 20:29:22.959287  349640 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:22.976170  349640 fix.go:112] recreateIfNeeded on newest-cni-307728: state=Stopped err=<nil>
	W1227 20:29:22.976196  349640 fix.go:138] unexpected machine state, will restart: <nil>
	W1227 20:29:23.141106  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	W1227 20:29:25.640346  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	W1227 20:29:27.641661  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	I1227 20:29:22.977899  349640 out.go:252] * Restarting existing docker container for "newest-cni-307728" ...
	I1227 20:29:22.977965  349640 cli_runner.go:164] Run: docker start newest-cni-307728
	I1227 20:29:23.209602  349640 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:23.228965  349640 kic.go:430] container "newest-cni-307728" state is running.
	I1227 20:29:23.229357  349640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-307728
	I1227 20:29:23.247657  349640 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/config.json ...
	I1227 20:29:23.247952  349640 machine.go:94] provisionDockerMachine start ...
	I1227 20:29:23.248040  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:23.266559  349640 main.go:144] libmachine: Using SSH client type: native
	I1227 20:29:23.266854  349640 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1227 20:29:23.266871  349640 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:29:23.267586  349640 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40356->127.0.0.1:33133: read: connection reset by peer
	I1227 20:29:26.389693  349640 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-307728
	
	I1227 20:29:26.389724  349640 ubuntu.go:182] provisioning hostname "newest-cni-307728"
	I1227 20:29:26.389772  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:26.407725  349640 main.go:144] libmachine: Using SSH client type: native
	I1227 20:29:26.407964  349640 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1227 20:29:26.407977  349640 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-307728 && echo "newest-cni-307728" | sudo tee /etc/hostname
	I1227 20:29:26.537069  349640 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-307728
	
	I1227 20:29:26.537154  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:26.554605  349640 main.go:144] libmachine: Using SSH client type: native
	I1227 20:29:26.554823  349640 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1227 20:29:26.554839  349640 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-307728' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-307728/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-307728' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:29:26.675284  349640 main.go:144] libmachine: SSH cmd err, output: <nil>: 
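
The shell snippet above is deliberately guarded: it only rewrites or appends the 127.0.1.1 entry when no line in /etc/hosts already maps the new hostname. For reference, a minimal Go sketch of the same idea, assuming direct local file access instead of SSH; the helper name ensureHostnameMapping is illustrative and not part of minikube's code.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostnameMapping mimics the guarded shell snippet above: if no
// /etc/hosts line already ends with the hostname, rewrite an existing
// 127.0.1.1 entry or append a new one. Illustrative only; minikube runs
// the equivalent commands over SSH.
func ensureHostnameMapping(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		fields := strings.Fields(l)
		if len(fields) > 1 && fields[len(fields)-1] == hostname {
			return nil // already mapped
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, fmt.Sprintf("127.0.1.1 %s", hostname))
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostnameMapping("/etc/hosts", "newest-cni-307728"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
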
	I1227 20:29:26.675315  349640 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-10897/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-10897/.minikube}
	I1227 20:29:26.675364  349640 ubuntu.go:190] setting up certificates
	I1227 20:29:26.675387  349640 provision.go:84] configureAuth start
	I1227 20:29:26.675446  349640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-307728
	I1227 20:29:26.693637  349640 provision.go:143] copyHostCerts
	I1227 20:29:26.693688  349640 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem, removing ...
	I1227 20:29:26.693704  349640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem
	I1227 20:29:26.693768  349640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem (1123 bytes)
	I1227 20:29:26.693867  349640 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem, removing ...
	I1227 20:29:26.693885  349640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem
	I1227 20:29:26.693934  349640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem (1675 bytes)
	I1227 20:29:26.694025  349640 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem, removing ...
	I1227 20:29:26.694034  349640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem
	I1227 20:29:26.694061  349640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem (1078 bytes)
	I1227 20:29:26.694130  349640 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem org=jenkins.newest-cni-307728 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-307728]
	I1227 20:29:26.867266  349640 provision.go:177] copyRemoteCerts
	I1227 20:29:26.867338  349640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:29:26.867388  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:26.885478  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:26.980147  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:29:26.999076  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 20:29:27.017075  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 20:29:27.035083  349640 provision.go:87] duration metric: took 359.672918ms to configureAuth
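
configureAuth regenerates server.pem with the SANs listed earlier (127.0.0.1, 192.168.103.2, localhost, minikube, newest-cni-307728) and copies it to /etc/docker/server.pem. A sketch of how such a certificate can be checked for the expected names with crypto/x509; the paths come from the log, while the checkSANs helper itself is a hypothetical illustration, not minikube's provisioning code.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// checkSANs loads a PEM certificate and reports whether it covers the given
// DNS names and IP addresses. Illustrative sketch only.
func checkSANs(certPath string, names []string) error {
	raw, err := os.ReadFile(certPath)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return fmt.Errorf("no PEM data in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	for _, name := range names {
		// VerifyHostname also matches IP SANs when name parses as an IP.
		if err := cert.VerifyHostname(name); err != nil {
			return fmt.Errorf("missing SAN %q: %w", name, err)
		}
	}
	return nil
}

func main() {
	err := checkSANs("/etc/docker/server.pem",
		[]string{"localhost", "minikube", "newest-cni-307728", "127.0.0.1", "192.168.103.2"})
	fmt.Println("SAN check:", err)
}
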
	I1227 20:29:27.035111  349640 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:29:27.035327  349640 config.go:182] Loaded profile config "newest-cni-307728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:29:27.035447  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:27.052793  349640 main.go:144] libmachine: Using SSH client type: native
	I1227 20:29:27.053075  349640 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1227 20:29:27.053104  349640 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:29:27.343702  349640 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:29:27.343727  349640 machine.go:97] duration metric: took 4.095755604s to provisionDockerMachine
	I1227 20:29:27.343741  349640 start.go:293] postStartSetup for "newest-cni-307728" (driver="docker")
	I1227 20:29:27.343754  349640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:29:27.343815  349640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:29:27.343863  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:27.367256  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:27.461046  349640 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:29:27.464376  349640 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:29:27.464409  349640 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:29:27.464430  349640 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/addons for local assets ...
	I1227 20:29:27.464483  349640 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/files for local assets ...
	I1227 20:29:27.464567  349640 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem -> 144272.pem in /etc/ssl/certs
	I1227 20:29:27.464649  349640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:29:27.471953  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:29:27.488345  349640 start.go:296] duration metric: took 144.591413ms for postStartSetup
	I1227 20:29:27.488403  349640 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:29:27.488434  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:27.506383  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:27.597986  349640 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:29:27.602558  349640 fix.go:56] duration metric: took 4.64345174s for fixHost
	I1227 20:29:27.602585  349640 start.go:83] releasing machines lock for "newest-cni-307728", held for 4.643494258s
	I1227 20:29:27.602644  349640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-307728
	I1227 20:29:27.623164  349640 ssh_runner.go:195] Run: cat /version.json
	I1227 20:29:27.623225  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:27.623311  349640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:29:27.623401  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:27.644318  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:27.644706  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:27.735874  349640 ssh_runner.go:195] Run: systemctl --version
	I1227 20:29:27.796779  349640 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:29:27.836209  349640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:29:27.841396  349640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:29:27.841458  349640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:29:27.849842  349640 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:29:27.849864  349640 start.go:496] detecting cgroup driver to use...
	I1227 20:29:27.849891  349640 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 20:29:27.850059  349640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:29:27.863872  349640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:29:27.876702  349640 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:29:27.876753  349640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:29:27.890649  349640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:29:27.903058  349640 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:29:27.992790  349640 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:29:28.078394  349640 docker.go:234] disabling docker service ...
	I1227 20:29:28.078471  349640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:29:28.093111  349640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:29:28.105866  349640 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:29:28.195542  349640 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:29:28.278015  349640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:29:28.291348  349640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:29:28.305334  349640 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:29:28.305405  349640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.314550  349640 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 20:29:28.314619  349640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.324597  349640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.334691  349640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.346435  349640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:29:28.356445  349640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.366534  349640 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.375089  349640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.384484  349640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:29:28.392136  349640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:29:28.399804  349640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:29:28.488345  349640 ssh_runner.go:195] Run: sudo systemctl restart crio
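
The sed invocations above rewrite pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf before cri-o is restarted. A local Go sketch of the same text substitution, assuming the drop-in file is readable on the current host; setCrioOption is an illustrative helper, whereas minikube issues these edits through ssh_runner as shown in the log.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioOption rewrites a `key = ...` line in a cri-o drop-in config,
// mirroring the sed commands above. Illustrative only.
func setCrioOption(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	if !re.Match(data) {
		return fmt.Errorf("%s: no %q line to rewrite", path, key)
	}
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	_ = setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	_ = setCrioOption(conf, "cgroup_manager", "systemd")
}
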
	I1227 20:29:28.627177  349640 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:29:28.627250  349640 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:29:28.631981  349640 start.go:574] Will wait 60s for crictl version
	I1227 20:29:28.632034  349640 ssh_runner.go:195] Run: which crictl
	I1227 20:29:28.635757  349640 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:29:28.661999  349640 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:29:28.662074  349640 ssh_runner.go:195] Run: crio --version
	I1227 20:29:28.692995  349640 ssh_runner.go:195] Run: crio --version
	I1227 20:29:28.727086  349640 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:29:28.728112  349640 cli_runner.go:164] Run: docker network inspect newest-cni-307728 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:29:28.747478  349640 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1227 20:29:28.752558  349640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:29:28.764745  349640 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1227 20:29:28.765905  349640 kubeadm.go:884] updating cluster {Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:29:28.766060  349640 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:29:28.766106  349640 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:29:28.806106  349640 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:29:28.806131  349640 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:29:28.806184  349640 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:29:28.834446  349640 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:29:28.834465  349640 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:29:28.834473  349640 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0 crio true true} ...
	I1227 20:29:28.834603  349640 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-307728 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:29:28.834686  349640 ssh_runner.go:195] Run: crio config
	I1227 20:29:28.888266  349640 cni.go:84] Creating CNI manager for ""
	I1227 20:29:28.888297  349640 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:29:28.888314  349640 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1227 20:29:28.888343  349640 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-307728 NodeName:newest-cni-307728 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:29:28.888514  349640 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-307728"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:29:28.888582  349640 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:29:28.896598  349640 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:29:28.896658  349640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:29:28.904048  349640 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1227 20:29:28.916029  349640 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:29:28.928184  349640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
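
The kubeadm.yaml just written pins podSubnet to 10.42.0.0/16 and serviceSubnet to 10.96.0.0/12; the two ranges must stay disjoint, since pod IPs and ClusterIP service addresses are allocated from them independently. A small Go check of that property with net/netip, using the values from the config above (the check itself is an illustration, not part of the kubeadm code path shown here).

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Values from the generated kubeadm config above.
	podSubnet := netip.MustParsePrefix("10.42.0.0/16")
	serviceSubnet := netip.MustParsePrefix("10.96.0.0/12")

	if podSubnet.Overlaps(serviceSubnet) {
		fmt.Println("pod and service CIDRs overlap; pick disjoint ranges")
		return
	}
	fmt.Println("pod and service CIDRs are disjoint")
}
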
	I1227 20:29:28.940621  349640 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:29:28.944032  349640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:29:28.953826  349640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:29:29.049430  349640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:29:29.069168  349640 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728 for IP: 192.168.103.2
	I1227 20:29:29.069184  349640 certs.go:195] generating shared ca certs ...
	I1227 20:29:29.069197  349640 certs.go:227] acquiring lock for ca certs: {Name:mkf159d13acb73e6d0ac046e4aaef1dce56a185d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:29:29.069335  349640 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key
	I1227 20:29:29.069415  349640 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key
	I1227 20:29:29.069430  349640 certs.go:257] generating profile certs ...
	I1227 20:29:29.069535  349640 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/client.key
	I1227 20:29:29.069615  349640 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.key.f45295df
	I1227 20:29:29.069674  349640 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.key
	I1227 20:29:29.069814  349640 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem (1338 bytes)
	W1227 20:29:29.069857  349640 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427_empty.pem, impossibly tiny 0 bytes
	I1227 20:29:29.069870  349640 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 20:29:29.069905  349640 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:29:29.069966  349640 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:29:29.070003  349640 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem (1675 bytes)
	I1227 20:29:29.070061  349640 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:29:29.070605  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:29:29.089009  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:29:29.112171  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:29:29.134212  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 20:29:29.158133  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 20:29:29.181988  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:29:29.200678  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:29:29.218007  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:29:29.235685  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem --> /usr/share/ca-certificates/14427.pem (1338 bytes)
	I1227 20:29:29.255652  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /usr/share/ca-certificates/144272.pem (1708 bytes)
	I1227 20:29:29.274505  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:29:29.294020  349640 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:29:29.308273  349640 ssh_runner.go:195] Run: openssl version
	I1227 20:29:29.314351  349640 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14427.pem
	I1227 20:29:29.321706  349640 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14427.pem /etc/ssl/certs/14427.pem
	I1227 20:29:29.329192  349640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14427.pem
	I1227 20:29:29.332801  349640 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 19:58 /usr/share/ca-certificates/14427.pem
	I1227 20:29:29.332846  349640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14427.pem
	I1227 20:29:29.370829  349640 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:29:29.378564  349640 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/144272.pem
	I1227 20:29:29.386204  349640 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/144272.pem /etc/ssl/certs/144272.pem
	I1227 20:29:29.393976  349640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144272.pem
	I1227 20:29:29.397479  349640 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 19:58 /usr/share/ca-certificates/144272.pem
	I1227 20:29:29.397525  349640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144272.pem
	I1227 20:29:29.433631  349640 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:29:29.440987  349640 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:29:29.449024  349640 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:29:29.457943  349640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:29:29.461620  349640 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:29:29.461665  349640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:29:29.499185  349640 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:29:29.506551  349640 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:29:29.510965  349640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:29:29.551280  349640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:29:29.589754  349640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:29:29.641048  349640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:29:29.698248  349640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:29:29.757405  349640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
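
Each of the openssl invocations above uses -checkend 86400, which exits non-zero when the certificate expires within the next 24 hours; that is how the restart path decides whether control-plane certificates need regeneration. An equivalent check written directly against crypto/x509, using one of the paths from the log; the expiresWithin helper is illustrative, the real check shells out to openssl as shown.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend`. Illustrative sketch.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, "err:", err)
}
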
	I1227 20:29:29.803735  349640 kubeadm.go:401] StartCluster: {Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:29:29.803836  349640 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:29:29.803901  349640 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:29:29.835928  349640 cri.go:96] found id: "2c126392630ac0e9cc664d34784d1f3d6649b49a4944fb5ff018e652894a5f6c"
	I1227 20:29:29.835952  349640 cri.go:96] found id: "2468df267da64ce7026d23c356156eccc868b96989c5479a2b5dfe05bcf8f0dc"
	I1227 20:29:29.835959  349640 cri.go:96] found id: "5ae0e51100b8fd95d4ea6bcc9d74ec6251f93fbcceeee0b2dd3cee32dada6c31"
	I1227 20:29:29.835967  349640 cri.go:96] found id: "413cb76a28516298082c7dbccd5e654b39bcb99ebcb7579b812852e885f9f50f"
	I1227 20:29:29.835971  349640 cri.go:96] found id: ""
	I1227 20:29:29.836012  349640 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 20:29:29.848165  349640 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:29:29Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:29:29.848217  349640 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:29:29.857470  349640 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:29:29.857490  349640 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:29:29.857540  349640 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:29:29.865790  349640 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:29:29.866736  349640 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-307728" does not appear in /home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:29:29.867255  349640 kubeconfig.go:62] /home/jenkins/minikube-integration/22332-10897/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-307728" cluster setting kubeconfig missing "newest-cni-307728" context setting]
	I1227 20:29:29.867965  349640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/kubeconfig: {Name:mk4ccafb996928672a8e78a62ce47d8add645009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:29:29.869656  349640 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:29:29.877605  349640 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1227 20:29:29.877634  349640 kubeadm.go:602] duration metric: took 20.137461ms to restartPrimaryControlPlane
	I1227 20:29:29.877651  349640 kubeadm.go:403] duration metric: took 73.916779ms to StartCluster
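
restartPrimaryControlPlane decides whether the control plane needs reconfiguration by diffing the existing /var/tmp/minikube/kubeadm.yaml against the freshly generated kubeadm.yaml.new; an empty diff, as here, means the running cluster can be reused as-is. A minimal local equivalent of that comparison in Go, assuming both files are readable on the current host (sameConfig is an illustrative helper, the log performs the check with `sudo diff -u` over SSH).

package main

import (
	"bytes"
	"fmt"
	"os"
)

// sameConfig reports whether the on-disk kubeadm config matches the newly
// rendered one, i.e. whether a control-plane reconfiguration can be skipped.
func sameConfig(current, next string) (bool, error) {
	a, err := os.ReadFile(current)
	if err != nil {
		return false, err
	}
	b, err := os.ReadFile(next)
	if err != nil {
		return false, err
	}
	return bytes.Equal(a, b), nil
}

func main() {
	same, err := sameConfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println("no reconfiguration needed:", same, "err:", err)
}
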
	I1227 20:29:29.877669  349640 settings.go:142] acquiring lock: {Name:mkf3077b12a3cef0d75d0d0642c2652aa74718f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:29:29.877726  349640 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:29:29.879534  349640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/kubeconfig: {Name:mk4ccafb996928672a8e78a62ce47d8add645009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:29:29.879773  349640 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:29:29.880023  349640 config.go:182] Loaded profile config "newest-cni-307728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:29:29.880084  349640 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:29:29.880164  349640 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-307728"
	I1227 20:29:29.880179  349640 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-307728"
	W1227 20:29:29.880192  349640 addons.go:248] addon storage-provisioner should already be in state true
	I1227 20:29:29.880216  349640 host.go:66] Checking if "newest-cni-307728" exists ...
	I1227 20:29:29.880295  349640 addons.go:70] Setting dashboard=true in profile "newest-cni-307728"
	I1227 20:29:29.880319  349640 addons.go:70] Setting default-storageclass=true in profile "newest-cni-307728"
	I1227 20:29:29.880353  349640 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-307728"
	I1227 20:29:29.880324  349640 addons.go:239] Setting addon dashboard=true in "newest-cni-307728"
	W1227 20:29:29.880433  349640 addons.go:248] addon dashboard should already be in state true
	I1227 20:29:29.880462  349640 host.go:66] Checking if "newest-cni-307728" exists ...
	I1227 20:29:29.880671  349640 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:29.880672  349640 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:29.880907  349640 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:29.885068  349640 out.go:179] * Verifying Kubernetes components...
	I1227 20:29:29.888082  349640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:29:29.906427  349640 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:29:29.906423  349640 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 20:29:29.906727  349640 addons.go:239] Setting addon default-storageclass=true in "newest-cni-307728"
	W1227 20:29:29.906749  349640 addons.go:248] addon default-storageclass should already be in state true
	I1227 20:29:29.906798  349640 host.go:66] Checking if "newest-cni-307728" exists ...
	I1227 20:29:29.907308  349640 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:29.908502  349640 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:29:29.908563  349640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:29:29.908620  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:29.909594  349640 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 20:29:29.910726  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 20:29:29.910750  349640 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 20:29:29.910812  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:29.939150  349640 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:29:29.939175  349640 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:29:29.939233  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:29.940432  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:29.944922  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:29.977263  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:30.045064  349640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:29:30.058167  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 20:29:30.058191  349640 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 20:29:30.061437  349640 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:29:30.061487  349640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:29:30.073175  349640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:29:30.076392  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 20:29:30.076416  349640 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 20:29:30.079332  349640 api_server.go:72] duration metric: took 199.523544ms to wait for apiserver process to appear ...
	I1227 20:29:30.079356  349640 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:29:30.079373  349640 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1227 20:29:30.090441  349640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:29:30.094269  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 20:29:30.094291  349640 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 20:29:30.114023  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 20:29:30.114046  349640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 20:29:30.131494  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 20:29:30.131515  349640 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 20:29:30.149541  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 20:29:30.149615  349640 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 20:29:30.167283  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 20:29:30.167310  349640 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 20:29:30.184004  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 20:29:30.184024  349640 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 20:29:30.201013  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:29:30.201038  349640 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 20:29:30.217298  349640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
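
All dashboard manifests are applied in a single kubectl invocation with one -f flag per file, run as root with KUBECONFIG pointed at /var/lib/minikube/kubeconfig. A sketch of assembling that command with os/exec; the paths are taken from the log, while the applyManifests wrapper is illustrative rather than minikube's addon code.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifests builds one `kubectl apply` call with a -f flag per manifest,
// the way the addon invocation in the log does.
func applyManifests(kubectl, kubeconfig string, manifests []string) ([]byte, error) {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	return cmd.CombinedOutput()
}

func main() {
	out, err := applyManifests(
		"/var/lib/minikube/binaries/v1.35.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml",
		},
	)
	fmt.Println(string(out), err)
}
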
	I1227 20:29:30.994230  349640 api_server.go:325] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1227 20:29:30.994265  349640 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1227 20:29:30.994280  349640 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1227 20:29:31.078728  349640 api_server.go:325] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1227 20:29:31.078755  349640 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1227 20:29:31.079882  349640 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1227 20:29:31.090296  349640 api_server.go:325] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1227 20:29:31.090325  349640 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1227 20:29:31.580397  349640 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1227 20:29:31.585748  349640 api_server.go:325] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:29:31.585801  349640 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:29:31.606183  349640 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.532971297s)
	I1227 20:29:31.606241  349640 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.515771231s)
	I1227 20:29:31.606358  349640 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.389020153s)
	I1227 20:29:31.607861  349640 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-307728 addons enable metrics-server
	
	I1227 20:29:31.616813  349640 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 20:29:31.617978  349640 addons.go:530] duration metric: took 1.737896941s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1227 20:29:32.080229  349640 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1227 20:29:32.084464  349640 api_server.go:325] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:29:32.084506  349640 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:29:32.580069  349640 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1227 20:29:32.584664  349640 api_server.go:325] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1227 20:29:32.585710  349640 api_server.go:141] control plane version: v1.35.0
	I1227 20:29:32.585733  349640 api_server.go:131] duration metric: took 2.506370541s to wait for apiserver health ...
	I1227 20:29:32.585741  349640 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:29:32.588682  349640 system_pods.go:59] 8 kube-system pods found
	I1227 20:29:32.588707  349640 system_pods.go:61] "coredns-7d764666f9-v4xtw" [54b9ffbd-579b-483a-aa05-a65988e43aae] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 20:29:32.588718  349640 system_pods.go:61] "etcd-newest-cni-307728" [47c59b02-ea05-4deb-a2d5-f33fe18e738b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:29:32.588742  349640 system_pods.go:61] "kindnet-6z4tn" [93ba591e-f91b-4d17-bc19-0df196548fdd] Running
	I1227 20:29:32.588751  349640 system_pods.go:61] "kube-apiserver-newest-cni-307728" [ff05d4da-e496-4611-90a2-32a9e49a76a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:29:32.588759  349640 system_pods.go:61] "kube-controller-manager-newest-cni-307728" [98a6898f-bd6c-4bb5-97eb-767920c25375] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:29:32.588772  349640 system_pods.go:61] "kube-proxy-9qccb" [7af7999b-ede9-4da5-8e6f-df77472e1cdd] Running
	I1227 20:29:32.588778  349640 system_pods.go:61] "kube-scheduler-newest-cni-307728" [cac454d9-fa90-45da-b22c-5d0e23dc78a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:29:32.588785  349640 system_pods.go:61] "storage-provisioner" [b4c1fa65-07d5-4f68-a68b-43acd8569dd1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 20:29:32.588790  349640 system_pods.go:74] duration metric: took 3.044295ms to wait for pod list to return data ...
	I1227 20:29:32.588801  349640 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:29:32.591068  349640 default_sa.go:45] found service account: "default"
	I1227 20:29:32.591087  349640 default_sa.go:55] duration metric: took 2.281836ms for default service account to be created ...
	I1227 20:29:32.591100  349640 kubeadm.go:587] duration metric: took 2.711295065s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 20:29:32.591132  349640 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:29:32.592996  349640 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 20:29:32.593016  349640 node_conditions.go:123] node cpu capacity is 8
	I1227 20:29:32.593030  349640 node_conditions.go:105] duration metric: took 1.888982ms to run NodePressure ...
	I1227 20:29:32.593046  349640 start.go:242] waiting for startup goroutines ...
	I1227 20:29:32.593062  349640 start.go:247] waiting for cluster config update ...
	I1227 20:29:32.593076  349640 start.go:256] writing updated cluster config ...
	I1227 20:29:32.593351  349640 ssh_runner.go:195] Run: rm -f paused
	I1227 20:29:32.641222  349640 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	W1227 20:29:30.142107  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	W1227 20:29:32.640365  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	I1227 20:29:32.643768  349640 out.go:179] * Done! kubectl is now configured to use "newest-cni-307728" cluster and "default" namespace by default
	W1227 20:29:35.140008  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	W1227 20:29:37.140375  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	I1227 20:29:38.142882  340025 pod_ready.go:94] pod "coredns-7d764666f9-gtzdb" is "Ready"
	I1227 20:29:38.142921  340025 pod_ready.go:86] duration metric: took 39.50828616s for pod "coredns-7d764666f9-gtzdb" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:29:38.154297  340025 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-954154" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:29:38.162018  340025 pod_ready.go:94] pod "etcd-default-k8s-diff-port-954154" is "Ready"
	I1227 20:29:38.162051  340025 pod_ready.go:86] duration metric: took 7.724693ms for pod "etcd-default-k8s-diff-port-954154" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:29:38.252005  340025 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-954154" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:29:38.256448  340025 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-954154" is "Ready"
	I1227 20:29:38.256469  340025 pod_ready.go:86] duration metric: took 4.441152ms for pod "kube-apiserver-default-k8s-diff-port-954154" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:29:38.258219  340025 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-954154" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:29:38.339067  340025 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-954154" is "Ready"
	I1227 20:29:38.339093  340025 pod_ready.go:86] duration metric: took 80.855659ms for pod "kube-controller-manager-default-k8s-diff-port-954154" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:29:38.540066  340025 pod_ready.go:83] waiting for pod "kube-proxy-m5zcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:29:38.939011  340025 pod_ready.go:94] pod "kube-proxy-m5zcc" is "Ready"
	I1227 20:29:38.939036  340025 pod_ready.go:86] duration metric: took 398.941518ms for pod "kube-proxy-m5zcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:29:39.138935  340025 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-954154" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:29:39.539151  340025 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-954154" is "Ready"
	I1227 20:29:39.539183  340025 pod_ready.go:86] duration metric: took 400.221359ms for pod "kube-scheduler-default-k8s-diff-port-954154" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:29:39.539199  340025 pod_ready.go:40] duration metric: took 40.907731262s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:29:39.585976  340025 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 20:29:39.587469  340025 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-954154" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 27 20:29:19 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:19.023663042Z" level=info msg="Started container" PID=1768 containerID=2315d2516654450c12a328473fb42443cc449f79307913236f7e55076a17a482 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97/dashboard-metrics-scraper id=583207b1-7eaf-4995-be6b-91980d347912 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fbe597bbd1c4320be908e3a167ad815e83f13c8c3051e6604c8cd09fc9c0eaad
	Dec 27 20:29:19 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:19.066500198Z" level=info msg="Removing container: fc0c624c1d035c21e1d073440fc1a5e770275a7f859b82c16192bbab9eda985f" id=2f37defa-ebde-4a39-8395-34ae9705c944 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:29:19 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:19.076019353Z" level=info msg="Removed container fc0c624c1d035c21e1d073440fc1a5e770275a7f859b82c16192bbab9eda985f: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97/dashboard-metrics-scraper" id=2f37defa-ebde-4a39-8395-34ae9705c944 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:29:29 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:29.093557502Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5c989034-efbb-460a-8bba-59410a094837 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:29:29 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:29.094639628Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e2aa4b88-a35d-40ac-b652-51c89d7cc97d name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:29:29 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:29.095683795Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=2a3963ed-352b-4dd9-a2b3-3e2f1a2f2a15 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:29:29 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:29.095894345Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:29 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:29.101606368Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:29 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:29.101795084Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0fbc20fbc36d0565592756b0b70819f8450ef85ca74a7b473113f815f62ca5a3/merged/etc/passwd: no such file or directory"
	Dec 27 20:29:29 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:29.101830957Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0fbc20fbc36d0565592756b0b70819f8450ef85ca74a7b473113f815f62ca5a3/merged/etc/group: no such file or directory"
	Dec 27 20:29:29 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:29.102145997Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:29 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:29.129481472Z" level=info msg="Created container 2af3abf306111acd86d61aed1f801fe21ed75889c95284c3039cead7dfd97901: kube-system/storage-provisioner/storage-provisioner" id=2a3963ed-352b-4dd9-a2b3-3e2f1a2f2a15 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:29:29 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:29.130276263Z" level=info msg="Starting container: 2af3abf306111acd86d61aed1f801fe21ed75889c95284c3039cead7dfd97901" id=940408d3-4aac-48df-bd10-d163f1e5adf2 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:29:29 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:29.132407806Z" level=info msg="Started container" PID=1783 containerID=2af3abf306111acd86d61aed1f801fe21ed75889c95284c3039cead7dfd97901 description=kube-system/storage-provisioner/storage-provisioner id=940408d3-4aac-48df-bd10-d163f1e5adf2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7539d7bae1e5992956c4d41119c1f97a76e90ee5614a3cb15063b0390280b582
	Dec 27 20:29:41 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:41.967974996Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=bb4f8dc3-8cfe-416f-bf9c-53976e562944 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:29:41 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:41.969036914Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=706cd668-a4b0-444b-b4c5-d87cbdcdaad3 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:29:41 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:41.970059416Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97/dashboard-metrics-scraper" id=1bd92ff0-5c9c-4153-9055-efd7c7dacc7f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:29:41 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:41.970197924Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:41 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:41.975046288Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:41 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:41.975463787Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:42 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:42.00771465Z" level=info msg="Created container aeb3a9e4af58f86b907a9dc8c2dbf3b0d1d10b345599d18f8009993fd2521fee: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97/dashboard-metrics-scraper" id=1bd92ff0-5c9c-4153-9055-efd7c7dacc7f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:29:42 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:42.008264785Z" level=info msg="Starting container: aeb3a9e4af58f86b907a9dc8c2dbf3b0d1d10b345599d18f8009993fd2521fee" id=45db7c93-dc34-4357-bc94-52f71805af59 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:29:42 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:42.00983478Z" level=info msg="Started container" PID=1822 containerID=aeb3a9e4af58f86b907a9dc8c2dbf3b0d1d10b345599d18f8009993fd2521fee description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97/dashboard-metrics-scraper id=45db7c93-dc34-4357-bc94-52f71805af59 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fbe597bbd1c4320be908e3a167ad815e83f13c8c3051e6604c8cd09fc9c0eaad
	Dec 27 20:29:42 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:42.132840299Z" level=info msg="Removing container: 2315d2516654450c12a328473fb42443cc449f79307913236f7e55076a17a482" id=a5df3a13-7fc7-4d1a-a5f1-97fbe826d1d0 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:29:42 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:42.141833579Z" level=info msg="Removed container 2315d2516654450c12a328473fb42443cc449f79307913236f7e55076a17a482: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97/dashboard-metrics-scraper" id=a5df3a13-7fc7-4d1a-a5f1-97fbe826d1d0 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	aeb3a9e4af58f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   3                   fbe597bbd1c43       dashboard-metrics-scraper-867fb5f87b-spk97             kubernetes-dashboard
	2af3abf306111       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   7539d7bae1e59       storage-provisioner                                    kube-system
	ab53c9c42b91e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   49 seconds ago      Running             kubernetes-dashboard        0                   4e827ed1c0981       kubernetes-dashboard-b84665fb8-nqh72                   kubernetes-dashboard
	547e235fbc652       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   6955da1e30a39       busybox                                                default
	a235d8e194a33       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           55 seconds ago      Running             coredns                     0                   f3b3a580f89ab       coredns-7d764666f9-gtzdb                               kube-system
	9534087ad19d0       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           55 seconds ago      Running             kindnet-cni                 0                   bf8695edbbef3       kindnet-c9zm9                                          kube-system
	90d96883673c7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   7539d7bae1e59       storage-provisioner                                    kube-system
	a83814d26e9fe       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           55 seconds ago      Running             kube-proxy                  0                   f696a6e43a45d       kube-proxy-m5zcc                                       kube-system
	5931439a8c0a5       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           58 seconds ago      Running             kube-controller-manager     0                   11496f2d0c2d9       kube-controller-manager-default-k8s-diff-port-954154   kube-system
	706a22c5fabaf       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           58 seconds ago      Running             kube-scheduler              0                   a16e518fc8268       kube-scheduler-default-k8s-diff-port-954154            kube-system
	0959afbe1a995       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           58 seconds ago      Running             kube-apiserver              0                   45bac79303d48       kube-apiserver-default-k8s-diff-port-954154            kube-system
	8b0860861ddb0       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           58 seconds ago      Running             etcd                        0                   3dddbd76b9d06       etcd-default-k8s-diff-port-954154                      kube-system
	
	
	==> coredns [a235d8e194a333df66ad83d10b5899176171cd0d7c0c95256c8864cb76d3b1c2] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:41326 - 30416 "HINFO IN 6118200687848454748.8221138099289553947. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.064664195s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-954154
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-954154
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=default-k8s-diff-port-954154
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_27_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:27:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-954154
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:29:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:29:27 +0000   Sat, 27 Dec 2025 20:27:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:29:27 +0000   Sat, 27 Dec 2025 20:27:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:29:27 +0000   Sat, 27 Dec 2025 20:27:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:29:27 +0000   Sat, 27 Dec 2025 20:28:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-954154
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                7ca85da6-448a-4be6-8ab2-a8891caf574d
	  Boot ID:                    e862e3ac-f8e7-431b-a536-9252fc29cb3f
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-7d764666f9-gtzdb                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-default-k8s-diff-port-954154                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-c9zm9                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-default-k8s-diff-port-954154             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-954154    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-m5zcc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-default-k8s-diff-port-954154             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-spk97              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-nqh72                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  111s  node-controller  Node default-k8s-diff-port-954154 event: Registered Node default-k8s-diff-port-954154 in Controller
	  Normal  RegisteredNode  53s   node-controller  Node default-k8s-diff-port-954154 event: Registered Node default-k8s-diff-port-954154 in Controller
	
	
	==> dmesg <==
	[  +0.091803] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026212] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.286875] kauditd_printk_skb: 47 callbacks suppressed
	[Dec27 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e cb b0 88 25 d7 08 06
	[ +12.641300] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 46 2d 1c 8d 7d 08 06
	[  +0.000396] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e cb b0 88 25 d7 08 06
	[Dec27 20:26] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 11 8b 52 63 6c 08 06
	[ +22.750477] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 b6 be 1c 32 17 08 06
	[  +0.000388] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 11 8b 52 63 6c 08 06
	[ +11.650371] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae cd 49 2a 1d c0 08 06
	[Dec27 20:27] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 53 ca 84 73 c0 08 06
	[  +0.000337] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 74 f5 52 32 9a 08 06
	[ +17.275927] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 80 6a 51 25 19 08 06
	[  +0.000341] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ae cd 49 2a 1d c0 08 06
	
	
	==> etcd [8b0860861ddb032e4078fd3575653ab5f5a77cc25e17d9ecb7910330acbef6e8] <==
	{"level":"info","ts":"2025-12-27T20:28:55.567311Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T20:28:55.567322Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T20:28:55.566799Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-27T20:28:55.567434Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T20:28:55.567490Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T20:28:55.567773Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T20:28:55.567859Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T20:28:55.658145Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T20:28:55.658308Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T20:28:55.658406Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-27T20:28:55.658448Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:28:55.658487Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T20:28:55.660407Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-27T20:28:55.660445Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:28:55.660468Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-27T20:28:55.660478Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-27T20:28:55.664528Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:default-k8s-diff-port-954154 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:28:55.664567Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:28:55.664728Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:28:55.665070Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:28:55.665140Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T20:28:55.666063Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:28:55.667302Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:28:55.671698Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:28:55.671943Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 20:29:53 up  1:12,  0 user,  load average: 2.46, 2.99, 2.25
	Linux default-k8s-diff-port-954154 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9534087ad19d0cf1c6a64a0fc06e25e8871c31789bd13e3b1daa949f660f0cb3] <==
	I1227 20:28:58.505554       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:28:58.505822       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1227 20:28:58.506040       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:28:58.506067       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:28:58.506088       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:28:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:28:58.707751       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:28:58.707806       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:28:58.707820       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:28:58.898908       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 20:28:59.098886       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:28:59.098951       1 metrics.go:72] Registering metrics
	I1227 20:28:59.099046       1 controller.go:711] "Syncing nftables rules"
	I1227 20:29:08.708122       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 20:29:08.708193       1 main.go:301] handling current node
	I1227 20:29:18.707999       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 20:29:18.708056       1 main.go:301] handling current node
	I1227 20:29:28.708149       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 20:29:28.708188       1 main.go:301] handling current node
	I1227 20:29:38.707248       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 20:29:38.707285       1 main.go:301] handling current node
	I1227 20:29:48.707895       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 20:29:48.707945       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0959afbe1a9958167e4657feae2ee275c220e183182d01f3bfb3c248002f75ad] <==
	I1227 20:28:57.020153       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1227 20:28:57.020262       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 20:28:57.020382       1 aggregator.go:187] initial CRD sync complete...
	I1227 20:28:57.020392       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 20:28:57.020399       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 20:28:57.020405       1 cache.go:39] Caches are synced for autoregister controller
	I1227 20:28:57.020579       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 20:28:57.020584       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 20:28:57.021688       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1227 20:28:57.025789       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 20:28:57.027846       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 20:28:57.034486       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:28:57.065289       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:28:57.087479       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 20:28:57.313536       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 20:28:57.340504       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 20:28:57.357551       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:28:57.364354       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:28:57.371899       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:28:57.404072       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.38.19"}
	I1227 20:28:57.413393       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.13.191"}
	I1227 20:28:57.923310       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:29:00.595848       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 20:29:00.696397       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:29:00.796671       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [5931439a8c0a571b5456aa8ff89ec7efa4a328c297d4825276b5ce8da13b99a6] <==
	I1227 20:29:00.148052       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.148235       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.148338       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 20:29:00.148357       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:29:00.148363       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.148373       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.148388       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.148493       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.148784       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.148823       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.148983       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.149009       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.149136       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.149180       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.149250       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.149675       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.149794       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.149799       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.150046       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.153950       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.157836       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:29:00.248085       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.248115       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:29:00.248121       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:29:00.259004       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [a83814d26e9fe52126c4d08033b6e1e1f2a478f9db8a48f3f69ebb4c0202e7d1] <==
	I1227 20:28:58.363864       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:28:58.422816       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:28:58.523780       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:58.523820       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1227 20:28:58.523961       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:28:58.542584       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:28:58.542642       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:28:58.547654       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:28:58.548025       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:28:58.548042       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:28:58.549263       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:28:58.549332       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:28:58.549301       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:28:58.549416       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:28:58.549337       1 config.go:309] "Starting node config controller"
	I1227 20:28:58.549466       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:28:58.549489       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:28:58.549510       1 config.go:200] "Starting service config controller"
	I1227 20:28:58.549567       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:28:58.649904       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 20:28:58.649935       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 20:28:58.649958       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [706a22c5fabaf1ee5036f9f7e94b4c7a02acaeb2002009e6416b6682929512f2] <==
	I1227 20:28:55.812523       1 serving.go:386] Generated self-signed cert in-memory
	W1227 20:28:56.932206       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 20:28:56.932264       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 20:28:56.932276       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 20:28:56.932373       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 20:28:56.972847       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 20:28:56.972901       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:28:56.979652       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 20:28:56.979717       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:28:56.982521       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 20:28:56.982582       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1227 20:28:57.001716       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 20:28:57.004005       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 20:28:57.004362       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 20:28:57.004471       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 20:28:57.004709       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 20:28:57.006084       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	I1227 20:28:57.079958       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:29:17 default-k8s-diff-port-954154 kubelet[730]: E1227 20:29:17.796085     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-spk97_kubernetes-dashboard(5efd0be2-9a08-4d0d-9d81-a027dffb3bd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97" podUID="5efd0be2-9a08-4d0d-9d81-a027dffb3bd7"
	Dec 27 20:29:18 default-k8s-diff-port-954154 kubelet[730]: E1227 20:29:18.967839     730 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97" containerName="dashboard-metrics-scraper"
	Dec 27 20:29:18 default-k8s-diff-port-954154 kubelet[730]: I1227 20:29:18.967888     730 scope.go:122] "RemoveContainer" containerID="fc0c624c1d035c21e1d073440fc1a5e770275a7f859b82c16192bbab9eda985f"
	Dec 27 20:29:19 default-k8s-diff-port-954154 kubelet[730]: I1227 20:29:19.065295     730 scope.go:122] "RemoveContainer" containerID="fc0c624c1d035c21e1d073440fc1a5e770275a7f859b82c16192bbab9eda985f"
	Dec 27 20:29:19 default-k8s-diff-port-954154 kubelet[730]: E1227 20:29:19.065530     730 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97" containerName="dashboard-metrics-scraper"
	Dec 27 20:29:19 default-k8s-diff-port-954154 kubelet[730]: I1227 20:29:19.065562     730 scope.go:122] "RemoveContainer" containerID="2315d2516654450c12a328473fb42443cc449f79307913236f7e55076a17a482"
	Dec 27 20:29:19 default-k8s-diff-port-954154 kubelet[730]: E1227 20:29:19.065749     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-spk97_kubernetes-dashboard(5efd0be2-9a08-4d0d-9d81-a027dffb3bd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97" podUID="5efd0be2-9a08-4d0d-9d81-a027dffb3bd7"
	Dec 27 20:29:27 default-k8s-diff-port-954154 kubelet[730]: E1227 20:29:27.795550     730 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97" containerName="dashboard-metrics-scraper"
	Dec 27 20:29:27 default-k8s-diff-port-954154 kubelet[730]: I1227 20:29:27.795591     730 scope.go:122] "RemoveContainer" containerID="2315d2516654450c12a328473fb42443cc449f79307913236f7e55076a17a482"
	Dec 27 20:29:27 default-k8s-diff-port-954154 kubelet[730]: E1227 20:29:27.795750     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-spk97_kubernetes-dashboard(5efd0be2-9a08-4d0d-9d81-a027dffb3bd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97" podUID="5efd0be2-9a08-4d0d-9d81-a027dffb3bd7"
	Dec 27 20:29:29 default-k8s-diff-port-954154 kubelet[730]: I1227 20:29:29.093040     730 scope.go:122] "RemoveContainer" containerID="90d96883673c78a13b5329162bd2a7485dcb13c728e7539421116adef8f9b6c4"
	Dec 27 20:29:38 default-k8s-diff-port-954154 kubelet[730]: E1227 20:29:38.120530     730 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-gtzdb" containerName="coredns"
	Dec 27 20:29:41 default-k8s-diff-port-954154 kubelet[730]: E1227 20:29:41.967370     730 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97" containerName="dashboard-metrics-scraper"
	Dec 27 20:29:41 default-k8s-diff-port-954154 kubelet[730]: I1227 20:29:41.967408     730 scope.go:122] "RemoveContainer" containerID="2315d2516654450c12a328473fb42443cc449f79307913236f7e55076a17a482"
	Dec 27 20:29:42 default-k8s-diff-port-954154 kubelet[730]: I1227 20:29:42.131565     730 scope.go:122] "RemoveContainer" containerID="2315d2516654450c12a328473fb42443cc449f79307913236f7e55076a17a482"
	Dec 27 20:29:42 default-k8s-diff-port-954154 kubelet[730]: E1227 20:29:42.131799     730 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97" containerName="dashboard-metrics-scraper"
	Dec 27 20:29:42 default-k8s-diff-port-954154 kubelet[730]: I1227 20:29:42.131830     730 scope.go:122] "RemoveContainer" containerID="aeb3a9e4af58f86b907a9dc8c2dbf3b0d1d10b345599d18f8009993fd2521fee"
	Dec 27 20:29:42 default-k8s-diff-port-954154 kubelet[730]: E1227 20:29:42.132043     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-spk97_kubernetes-dashboard(5efd0be2-9a08-4d0d-9d81-a027dffb3bd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97" podUID="5efd0be2-9a08-4d0d-9d81-a027dffb3bd7"
	Dec 27 20:29:47 default-k8s-diff-port-954154 kubelet[730]: E1227 20:29:47.795096     730 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97" containerName="dashboard-metrics-scraper"
	Dec 27 20:29:47 default-k8s-diff-port-954154 kubelet[730]: I1227 20:29:47.795134     730 scope.go:122] "RemoveContainer" containerID="aeb3a9e4af58f86b907a9dc8c2dbf3b0d1d10b345599d18f8009993fd2521fee"
	Dec 27 20:29:47 default-k8s-diff-port-954154 kubelet[730]: E1227 20:29:47.795295     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-spk97_kubernetes-dashboard(5efd0be2-9a08-4d0d-9d81-a027dffb3bd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97" podUID="5efd0be2-9a08-4d0d-9d81-a027dffb3bd7"
	Dec 27 20:29:51 default-k8s-diff-port-954154 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 20:29:51 default-k8s-diff-port-954154 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 20:29:51 default-k8s-diff-port-954154 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:29:51 default-k8s-diff-port-954154 systemd[1]: kubelet.service: Consumed 1.766s CPU time.
	
	
	==> kubernetes-dashboard [ab53c9c42b91e3b26fe7869e87d99f9ffa94077f731d37a6fd683cc5012d55de] <==
	2025/12/27 20:29:04 Using namespace: kubernetes-dashboard
	2025/12/27 20:29:04 Using in-cluster config to connect to apiserver
	2025/12/27 20:29:04 Using secret token for csrf signing
	2025/12/27 20:29:04 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 20:29:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 20:29:04 Successful initial request to the apiserver, version: v1.35.0
	2025/12/27 20:29:04 Generating JWE encryption key
	2025/12/27 20:29:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 20:29:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 20:29:04 Initializing JWE encryption key from synchronized object
	2025/12/27 20:29:04 Creating in-cluster Sidecar client
	2025/12/27 20:29:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 20:29:04 Serving insecurely on HTTP port: 9090
	2025/12/27 20:29:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 20:29:04 Starting overwatch
	
	
	==> storage-provisioner [2af3abf306111acd86d61aed1f801fe21ed75889c95284c3039cead7dfd97901] <==
	I1227 20:29:29.148449       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 20:29:29.158113       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 20:29:29.158254       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 20:29:29.165478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:32.621186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:36.882135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:40.480859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:43.534292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:46.556518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:46.560500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:29:46.560627       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 20:29:46.560786       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-954154_81261543-4121-460f-ba19-6f2077900bf4!
	I1227 20:29:46.560785       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d6330280-d91e-46b9-b706-b20e6fbb3c3b", APIVersion:"v1", ResourceVersion:"639", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-954154_81261543-4121-460f-ba19-6f2077900bf4 became leader
	W1227 20:29:46.562495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:46.566388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:29:46.661090       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-954154_81261543-4121-460f-ba19-6f2077900bf4!
	W1227 20:29:48.569479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:48.572961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:50.575717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:50.579318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:52.582355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:52.587962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [90d96883673c78a13b5329162bd2a7485dcb13c728e7539421116adef8f9b6c4] <==
	I1227 20:28:58.334010       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 20:29:28.336382       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-954154 -n default-k8s-diff-port-954154
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-954154 -n default-k8s-diff-port-954154: exit status 2 (321.997738ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-954154 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-954154
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-954154:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c38cf1a04b3b50e78185395c88764e088687423a81d7f074a8b0f01b542d6987",
	        "Created": "2025-12-27T20:27:45.398813644Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 340254,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:28:47.989059913Z",
	            "FinishedAt": "2025-12-27T20:28:46.736762371Z"
	        },
	        "Image": "sha256:c444b5e11df9d6bc496256b0e11f4e11deb33a885211ded2c3a4667df55bcb6b",
	        "ResolvConfPath": "/var/lib/docker/containers/c38cf1a04b3b50e78185395c88764e088687423a81d7f074a8b0f01b542d6987/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c38cf1a04b3b50e78185395c88764e088687423a81d7f074a8b0f01b542d6987/hostname",
	        "HostsPath": "/var/lib/docker/containers/c38cf1a04b3b50e78185395c88764e088687423a81d7f074a8b0f01b542d6987/hosts",
	        "LogPath": "/var/lib/docker/containers/c38cf1a04b3b50e78185395c88764e088687423a81d7f074a8b0f01b542d6987/c38cf1a04b3b50e78185395c88764e088687423a81d7f074a8b0f01b542d6987-json.log",
	        "Name": "/default-k8s-diff-port-954154",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-954154:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-954154",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c38cf1a04b3b50e78185395c88764e088687423a81d7f074a8b0f01b542d6987",
	                "LowerDir": "/var/lib/docker/overlay2/addfd789be6ed936f248b11101a678e63407804f63584452c7d6d0a2d65aa48f-init/diff:/var/lib/docker/overlay2/37a0694d5e09176fe02add5b16d603e44d65e3a985a3465775c60819ea782502/diff",
	                "MergedDir": "/var/lib/docker/overlay2/addfd789be6ed936f248b11101a678e63407804f63584452c7d6d0a2d65aa48f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/addfd789be6ed936f248b11101a678e63407804f63584452c7d6d0a2d65aa48f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/addfd789be6ed936f248b11101a678e63407804f63584452c7d6d0a2d65aa48f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-954154",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-954154/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-954154",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-954154",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-954154",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9aa0c984b9f3af0dc980b243a73d65937f76bb177f16789189e7fd703e5173dd",
	            "SandboxKey": "/var/run/docker/netns/9aa0c984b9f3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-954154": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8bb8ec9ff71cd755e87cbf3d8e42ebf773088a83f754b577a011fbcdb7983e0c",
	                    "EndpointID": "cece27508983d2d015abcfe8772b4fcd9ed787548fb1f99e115fa3e14523301c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "52:3d:92:ca:93:ed",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-954154",
	                        "c38cf1a04b3b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-954154 -n default-k8s-diff-port-954154
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-954154 -n default-k8s-diff-port-954154: exit status 2 (313.671027ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-954154 logs -n 25
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ start   │ -p newest-cni-307728 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:29 UTC │
	│ image   │ no-preload-014435 image list --format=json                                                                                                                                                                                                    │ no-preload-014435                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ pause   │ -p no-preload-014435 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-014435                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p no-preload-014435                                                                                                                                                                                                                          │ no-preload-014435                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ addons  │ enable metrics-server -p newest-cni-307728 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p no-preload-014435                                                                                                                                                                                                                          │ no-preload-014435                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ start   │ -p test-preload-dl-gcs-588477 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-588477        │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ stop    │ -p newest-cni-307728 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ delete  │ -p test-preload-dl-gcs-588477                                                                                                                                                                                                                 │ test-preload-dl-gcs-588477        │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ start   │ -p test-preload-dl-github-805734 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-805734     │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p test-preload-dl-github-805734                                                                                                                                                                                                              │ test-preload-dl-github-805734     │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-275955 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-275955 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-275955                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-275955 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ addons  │ enable dashboard -p newest-cni-307728 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ start   │ -p newest-cni-307728 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ image   │ embed-certs-820583 image list --format=json                                                                                                                                                                                                   │ embed-certs-820583                │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ pause   │ -p embed-certs-820583 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-820583                │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p embed-certs-820583                                                                                                                                                                                                                         │ embed-certs-820583                │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ image   │ newest-cni-307728 image list --format=json                                                                                                                                                                                                    │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ pause   │ -p newest-cni-307728 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p embed-certs-820583                                                                                                                                                                                                                         │ embed-certs-820583                │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ delete  │ -p newest-cni-307728                                                                                                                                                                                                                          │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ delete  │ -p newest-cni-307728                                                                                                                                                                                                                          │ newest-cni-307728                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ image   │ default-k8s-diff-port-954154 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-954154      │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ pause   │ -p default-k8s-diff-port-954154 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-954154      │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:29:22
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:29:22.784538  349640 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:29:22.784794  349640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:29:22.784803  349640 out.go:374] Setting ErrFile to fd 2...
	I1227 20:29:22.784808  349640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:29:22.785052  349640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:29:22.785520  349640 out.go:368] Setting JSON to false
	I1227 20:29:22.786562  349640 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4312,"bootTime":1766863051,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 20:29:22.786612  349640 start.go:143] virtualization: kvm guest
	I1227 20:29:22.788250  349640 out.go:179] * [newest-cni-307728] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 20:29:22.789332  349640 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:29:22.789351  349640 notify.go:221] Checking for updates...
	I1227 20:29:22.791442  349640 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:29:22.792602  349640 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:29:22.793592  349640 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube
	I1227 20:29:22.794578  349640 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 20:29:22.795545  349640 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:29:22.796871  349640 config.go:182] Loaded profile config "newest-cni-307728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:29:22.797487  349640 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:29:22.820540  349640 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 20:29:22.820686  349640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:29:22.876976  349640 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-27 20:29:22.867077037 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:29:22.877116  349640 docker.go:319] overlay module found
	I1227 20:29:22.878722  349640 out.go:179] * Using the docker driver based on existing profile
	I1227 20:29:22.879763  349640 start.go:309] selected driver: docker
	I1227 20:29:22.879776  349640 start.go:928] validating driver "docker" against &{Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:29:22.879862  349640 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:29:22.880423  349640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:29:22.933111  349640 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-27 20:29:22.923700326 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:29:22.933397  349640 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 20:29:22.933437  349640 cni.go:84] Creating CNI manager for ""
	I1227 20:29:22.933495  349640 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:29:22.933527  349640 start.go:353] cluster config:
	{Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:29:22.935838  349640 out.go:179] * Starting "newest-cni-307728" primary control-plane node in "newest-cni-307728" cluster
	I1227 20:29:22.936870  349640 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:29:22.938035  349640 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:29:22.939178  349640 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:29:22.939218  349640 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I1227 20:29:22.939230  349640 cache.go:65] Caching tarball of preloaded images
	I1227 20:29:22.939273  349640 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:29:22.939310  349640 preload.go:251] Found /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1227 20:29:22.939321  349640 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:29:22.939415  349640 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/config.json ...
	I1227 20:29:22.958953  349640 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:29:22.958973  349640 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:29:22.958989  349640 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:29:22.959021  349640 start.go:360] acquireMachinesLock for newest-cni-307728: {Name:mk68119d6288f8bd1ffbe980f508592c691efe9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:29:22.959080  349640 start.go:364] duration metric: took 32.398µs to acquireMachinesLock for "newest-cni-307728"
	I1227 20:29:22.959096  349640 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:29:22.959101  349640 fix.go:54] fixHost starting: 
	I1227 20:29:22.959287  349640 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:22.976170  349640 fix.go:112] recreateIfNeeded on newest-cni-307728: state=Stopped err=<nil>
	W1227 20:29:22.976196  349640 fix.go:138] unexpected machine state, will restart: <nil>
	W1227 20:29:23.141106  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	W1227 20:29:25.640346  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	W1227 20:29:27.641661  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	I1227 20:29:22.977899  349640 out.go:252] * Restarting existing docker container for "newest-cni-307728" ...
	I1227 20:29:22.977965  349640 cli_runner.go:164] Run: docker start newest-cni-307728
	I1227 20:29:23.209602  349640 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:23.228965  349640 kic.go:430] container "newest-cni-307728" state is running.
	I1227 20:29:23.229357  349640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-307728
	I1227 20:29:23.247657  349640 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/config.json ...
	I1227 20:29:23.247952  349640 machine.go:94] provisionDockerMachine start ...
	I1227 20:29:23.248040  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:23.266559  349640 main.go:144] libmachine: Using SSH client type: native
	I1227 20:29:23.266854  349640 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1227 20:29:23.266871  349640 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:29:23.267586  349640 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40356->127.0.0.1:33133: read: connection reset by peer
	I1227 20:29:26.389693  349640 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-307728
	
	I1227 20:29:26.389724  349640 ubuntu.go:182] provisioning hostname "newest-cni-307728"
	I1227 20:29:26.389772  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:26.407725  349640 main.go:144] libmachine: Using SSH client type: native
	I1227 20:29:26.407964  349640 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1227 20:29:26.407977  349640 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-307728 && echo "newest-cni-307728" | sudo tee /etc/hostname
	I1227 20:29:26.537069  349640 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-307728
	
	I1227 20:29:26.537154  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:26.554605  349640 main.go:144] libmachine: Using SSH client type: native
	I1227 20:29:26.554823  349640 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1227 20:29:26.554839  349640 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-307728' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-307728/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-307728' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:29:26.675284  349640 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:29:26.675315  349640 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-10897/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-10897/.minikube}
	I1227 20:29:26.675364  349640 ubuntu.go:190] setting up certificates
	I1227 20:29:26.675387  349640 provision.go:84] configureAuth start
	I1227 20:29:26.675446  349640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-307728
	I1227 20:29:26.693637  349640 provision.go:143] copyHostCerts
	I1227 20:29:26.693688  349640 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem, removing ...
	I1227 20:29:26.693704  349640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem
	I1227 20:29:26.693768  349640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/cert.pem (1123 bytes)
	I1227 20:29:26.693867  349640 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem, removing ...
	I1227 20:29:26.693885  349640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem
	I1227 20:29:26.693934  349640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/key.pem (1675 bytes)
	I1227 20:29:26.694025  349640 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem, removing ...
	I1227 20:29:26.694034  349640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem
	I1227 20:29:26.694061  349640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-10897/.minikube/ca.pem (1078 bytes)
	I1227 20:29:26.694130  349640 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem org=jenkins.newest-cni-307728 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-307728]
	I1227 20:29:26.867266  349640 provision.go:177] copyRemoteCerts
	I1227 20:29:26.867338  349640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:29:26.867388  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:26.885478  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:26.980147  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:29:26.999076  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 20:29:27.017075  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 20:29:27.035083  349640 provision.go:87] duration metric: took 359.672918ms to configureAuth
	I1227 20:29:27.035111  349640 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:29:27.035327  349640 config.go:182] Loaded profile config "newest-cni-307728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:29:27.035447  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:27.052793  349640 main.go:144] libmachine: Using SSH client type: native
	I1227 20:29:27.053075  349640 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1227 20:29:27.053104  349640 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:29:27.343702  349640 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:29:27.343727  349640 machine.go:97] duration metric: took 4.095755604s to provisionDockerMachine
	I1227 20:29:27.343741  349640 start.go:293] postStartSetup for "newest-cni-307728" (driver="docker")
	I1227 20:29:27.343754  349640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:29:27.343815  349640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:29:27.343863  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:27.367256  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:27.461046  349640 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:29:27.464376  349640 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:29:27.464409  349640 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:29:27.464430  349640 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/addons for local assets ...
	I1227 20:29:27.464483  349640 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-10897/.minikube/files for local assets ...
	I1227 20:29:27.464567  349640 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem -> 144272.pem in /etc/ssl/certs
	I1227 20:29:27.464649  349640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:29:27.471953  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:29:27.488345  349640 start.go:296] duration metric: took 144.591413ms for postStartSetup
	I1227 20:29:27.488403  349640 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:29:27.488434  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:27.506383  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:27.597986  349640 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:29:27.602558  349640 fix.go:56] duration metric: took 4.64345174s for fixHost
	I1227 20:29:27.602585  349640 start.go:83] releasing machines lock for "newest-cni-307728", held for 4.643494258s
	I1227 20:29:27.602644  349640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-307728
	I1227 20:29:27.623164  349640 ssh_runner.go:195] Run: cat /version.json
	I1227 20:29:27.623225  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:27.623311  349640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:29:27.623401  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:27.644318  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:27.644706  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:27.735874  349640 ssh_runner.go:195] Run: systemctl --version
	I1227 20:29:27.796779  349640 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:29:27.836209  349640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:29:27.841396  349640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:29:27.841458  349640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:29:27.849842  349640 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:29:27.849864  349640 start.go:496] detecting cgroup driver to use...
	I1227 20:29:27.849891  349640 detect.go:190] detected "systemd" cgroup driver on host os
	I1227 20:29:27.850059  349640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:29:27.863872  349640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:29:27.876702  349640 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:29:27.876753  349640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:29:27.890649  349640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:29:27.903058  349640 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:29:27.992790  349640 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:29:28.078394  349640 docker.go:234] disabling docker service ...
	I1227 20:29:28.078471  349640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:29:28.093111  349640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:29:28.105866  349640 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:29:28.195542  349640 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:29:28.278015  349640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:29:28.291348  349640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:29:28.305334  349640 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:29:28.305405  349640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.314550  349640 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 20:29:28.314619  349640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.324597  349640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.334691  349640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.346435  349640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:29:28.356445  349640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.366534  349640 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.375089  349640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:29:28.384484  349640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:29:28.392136  349640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:29:28.399804  349640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:29:28.488345  349640 ssh_runner.go:195] Run: sudo systemctl restart crio
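[editor's note] The sed commands above pin the pause image, switch CRI-O's cgroup_manager to "systemd", add conmon_cgroup = "pod", and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls before crio is restarted. A minimal Go sketch of one such in-place line rewrite, for illustration only (the replaceLine helper is not minikube's implementation; paths and values are copied from the commands above):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// replaceLine rewrites every line matching pattern with replacement, the same
// effect as: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' <file>
func replaceLine(path, pattern, replacement string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile("(?m)" + pattern) // (?m) makes ^ and $ match per line
	return os.WriteFile(path, re.ReplaceAll(data, []byte(replacement)), 0644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log above
	edits := [][2]string{
		{`^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10.1"`},
		{`^.*cgroup_manager = .*$`, `cgroup_manager = "systemd"`},
	}
	for _, e := range edits {
		if err := replaceLine(conf, e[0], e[1]); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}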
	I1227 20:29:28.627177  349640 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:29:28.627250  349640 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:29:28.631981  349640 start.go:574] Will wait 60s for crictl version
	I1227 20:29:28.632034  349640 ssh_runner.go:195] Run: which crictl
	I1227 20:29:28.635757  349640 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:29:28.661999  349640 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:29:28.662074  349640 ssh_runner.go:195] Run: crio --version
	I1227 20:29:28.692995  349640 ssh_runner.go:195] Run: crio --version
	I1227 20:29:28.727086  349640 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:29:28.728112  349640 cli_runner.go:164] Run: docker network inspect newest-cni-307728 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:29:28.747478  349640 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1227 20:29:28.752558  349640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
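[editor's note] The bash one-liner above updates /etc/hosts idempotently: it drops any existing host.minikube.internal line and appends the current mapping. A small Go sketch of the same idea, under the assumption that a local file is edited directly (the real run edits the guest's /etc/hosts over SSH; ensureHostsEntry and the example path are illustrative, not minikube code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any line already mapping name and appends "ip<TAB>name",
// mirroring the grep -v / echo / cp pipeline shown in the log above.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry, replaced below
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Illustrative values; the log uses 192.168.103.1 -> host.minikube.internal.
	if err := ensureHostsEntry("/tmp/hosts.example", "192.168.103.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}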
	I1227 20:29:28.764745  349640 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1227 20:29:28.765905  349640 kubeadm.go:884] updating cluster {Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:29:28.766060  349640 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:29:28.766106  349640 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:29:28.806106  349640 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:29:28.806131  349640 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:29:28.806184  349640 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:29:28.834446  349640 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:29:28.834465  349640 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:29:28.834473  349640 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0 crio true true} ...
	I1227 20:29:28.834603  349640 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-307728 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:29:28.834686  349640 ssh_runner.go:195] Run: crio config
	I1227 20:29:28.888266  349640 cni.go:84] Creating CNI manager for ""
	I1227 20:29:28.888297  349640 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:29:28.888314  349640 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1227 20:29:28.888343  349640 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-307728 NodeName:newest-cni-307728 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:29:28.888514  349640 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-307728"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:29:28.888582  349640 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:29:28.896598  349640 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:29:28.896658  349640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:29:28.904048  349640 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1227 20:29:28.916029  349640 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:29:28.928184  349640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1227 20:29:28.940621  349640 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:29:28.944032  349640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:29:28.953826  349640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:29:29.049430  349640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:29:29.069168  349640 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728 for IP: 192.168.103.2
	I1227 20:29:29.069184  349640 certs.go:195] generating shared ca certs ...
	I1227 20:29:29.069197  349640 certs.go:227] acquiring lock for ca certs: {Name:mkf159d13acb73e6d0ac046e4aaef1dce56a185d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:29:29.069335  349640 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key
	I1227 20:29:29.069415  349640 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key
	I1227 20:29:29.069430  349640 certs.go:257] generating profile certs ...
	I1227 20:29:29.069535  349640 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/client.key
	I1227 20:29:29.069615  349640 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.key.f45295df
	I1227 20:29:29.069674  349640 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.key
	I1227 20:29:29.069814  349640 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem (1338 bytes)
	W1227 20:29:29.069857  349640 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427_empty.pem, impossibly tiny 0 bytes
	I1227 20:29:29.069870  349640 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 20:29:29.069905  349640 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:29:29.069966  349640 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:29:29.070003  349640 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/certs/key.pem (1675 bytes)
	I1227 20:29:29.070061  349640 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem (1708 bytes)
	I1227 20:29:29.070605  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:29:29.089009  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:29:29.112171  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:29:29.134212  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 20:29:29.158133  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 20:29:29.181988  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:29:29.200678  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:29:29.218007  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/newest-cni-307728/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:29:29.235685  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/certs/14427.pem --> /usr/share/ca-certificates/14427.pem (1338 bytes)
	I1227 20:29:29.255652  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/ssl/certs/144272.pem --> /usr/share/ca-certificates/144272.pem (1708 bytes)
	I1227 20:29:29.274505  349640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:29:29.294020  349640 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:29:29.308273  349640 ssh_runner.go:195] Run: openssl version
	I1227 20:29:29.314351  349640 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/14427.pem
	I1227 20:29:29.321706  349640 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/14427.pem /etc/ssl/certs/14427.pem
	I1227 20:29:29.329192  349640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14427.pem
	I1227 20:29:29.332801  349640 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 19:58 /usr/share/ca-certificates/14427.pem
	I1227 20:29:29.332846  349640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14427.pem
	I1227 20:29:29.370829  349640 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:29:29.378564  349640 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/144272.pem
	I1227 20:29:29.386204  349640 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/144272.pem /etc/ssl/certs/144272.pem
	I1227 20:29:29.393976  349640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144272.pem
	I1227 20:29:29.397479  349640 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 19:58 /usr/share/ca-certificates/144272.pem
	I1227 20:29:29.397525  349640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144272.pem
	I1227 20:29:29.433631  349640 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:29:29.440987  349640 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:29:29.449024  349640 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:29:29.457943  349640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:29:29.461620  349640 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:55 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:29:29.461665  349640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:29:29.499185  349640 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:29:29.506551  349640 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:29:29.510965  349640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:29:29.551280  349640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:29:29.589754  349640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:29:29.641048  349640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:29:29.698248  349640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:29:29.757405  349640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
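[editor's note] Each `openssl x509 -noout -checkend 86400` run above asks whether a certificate expires within the next 24 hours. A minimal sketch of the same check in Go with crypto/x509 (the path is one of the certs from the log; the expiresWithin helper itself is illustrative, not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires before now+d,
// i.e. the condition that makes `openssl x509 -checkend <seconds>` fail.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}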
	I1227 20:29:29.803735  349640 kubeadm.go:401] StartCluster: {Name:newest-cni-307728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-307728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:29:29.803836  349640 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:29:29.803901  349640 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:29:29.835928  349640 cri.go:96] found id: "2c126392630ac0e9cc664d34784d1f3d6649b49a4944fb5ff018e652894a5f6c"
	I1227 20:29:29.835952  349640 cri.go:96] found id: "2468df267da64ce7026d23c356156eccc868b96989c5479a2b5dfe05bcf8f0dc"
	I1227 20:29:29.835959  349640 cri.go:96] found id: "5ae0e51100b8fd95d4ea6bcc9d74ec6251f93fbcceeee0b2dd3cee32dada6c31"
	I1227 20:29:29.835967  349640 cri.go:96] found id: "413cb76a28516298082c7dbccd5e654b39bcb99ebcb7579b812852e885f9f50f"
	I1227 20:29:29.835971  349640 cri.go:96] found id: ""
	I1227 20:29:29.836012  349640 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 20:29:29.848165  349640 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:29:29Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:29:29.848217  349640 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:29:29.857470  349640 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:29:29.857490  349640 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:29:29.857540  349640 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:29:29.865790  349640 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:29:29.866736  349640 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-307728" does not appear in /home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:29:29.867255  349640 kubeconfig.go:62] /home/jenkins/minikube-integration/22332-10897/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-307728" cluster setting kubeconfig missing "newest-cni-307728" context setting]
	I1227 20:29:29.867965  349640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/kubeconfig: {Name:mk4ccafb996928672a8e78a62ce47d8add645009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:29:29.869656  349640 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:29:29.877605  349640 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1227 20:29:29.877634  349640 kubeadm.go:602] duration metric: took 20.137461ms to restartPrimaryControlPlane
	I1227 20:29:29.877651  349640 kubeadm.go:403] duration metric: took 73.916779ms to StartCluster
	I1227 20:29:29.877669  349640 settings.go:142] acquiring lock: {Name:mkf3077b12a3cef0d75d0d0642c2652aa74718f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:29:29.877726  349640 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:29:29.879534  349640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-10897/kubeconfig: {Name:mk4ccafb996928672a8e78a62ce47d8add645009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:29:29.879773  349640 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:29:29.880023  349640 config.go:182] Loaded profile config "newest-cni-307728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:29:29.880084  349640 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:29:29.880164  349640 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-307728"
	I1227 20:29:29.880179  349640 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-307728"
	W1227 20:29:29.880192  349640 addons.go:248] addon storage-provisioner should already be in state true
	I1227 20:29:29.880216  349640 host.go:66] Checking if "newest-cni-307728" exists ...
	I1227 20:29:29.880295  349640 addons.go:70] Setting dashboard=true in profile "newest-cni-307728"
	I1227 20:29:29.880319  349640 addons.go:70] Setting default-storageclass=true in profile "newest-cni-307728"
	I1227 20:29:29.880353  349640 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-307728"
	I1227 20:29:29.880324  349640 addons.go:239] Setting addon dashboard=true in "newest-cni-307728"
	W1227 20:29:29.880433  349640 addons.go:248] addon dashboard should already be in state true
	I1227 20:29:29.880462  349640 host.go:66] Checking if "newest-cni-307728" exists ...
	I1227 20:29:29.880671  349640 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:29.880672  349640 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:29.880907  349640 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:29.885068  349640 out.go:179] * Verifying Kubernetes components...
	I1227 20:29:29.888082  349640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:29:29.906427  349640 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:29:29.906423  349640 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 20:29:29.906727  349640 addons.go:239] Setting addon default-storageclass=true in "newest-cni-307728"
	W1227 20:29:29.906749  349640 addons.go:248] addon default-storageclass should already be in state true
	I1227 20:29:29.906798  349640 host.go:66] Checking if "newest-cni-307728" exists ...
	I1227 20:29:29.907308  349640 cli_runner.go:164] Run: docker container inspect newest-cni-307728 --format={{.State.Status}}
	I1227 20:29:29.908502  349640 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:29:29.908563  349640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:29:29.908620  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:29.909594  349640 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 20:29:29.910726  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 20:29:29.910750  349640 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 20:29:29.910812  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:29.939150  349640 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:29:29.939175  349640 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:29:29.939233  349640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307728
	I1227 20:29:29.940432  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:29.944922  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:29.977263  349640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/newest-cni-307728/id_rsa Username:docker}
	I1227 20:29:30.045064  349640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:29:30.058167  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 20:29:30.058191  349640 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 20:29:30.061437  349640 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:29:30.061487  349640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:29:30.073175  349640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:29:30.076392  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 20:29:30.076416  349640 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 20:29:30.079332  349640 api_server.go:72] duration metric: took 199.523544ms to wait for apiserver process to appear ...
	I1227 20:29:30.079356  349640 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:29:30.079373  349640 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1227 20:29:30.090441  349640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:29:30.094269  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 20:29:30.094291  349640 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 20:29:30.114023  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 20:29:30.114046  349640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 20:29:30.131494  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 20:29:30.131515  349640 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 20:29:30.149541  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 20:29:30.149615  349640 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 20:29:30.167283  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 20:29:30.167310  349640 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 20:29:30.184004  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 20:29:30.184024  349640 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 20:29:30.201013  349640 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:29:30.201038  349640 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 20:29:30.217298  349640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:29:30.994230  349640 api_server.go:325] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1227 20:29:30.994265  349640 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1227 20:29:30.994280  349640 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1227 20:29:31.078728  349640 api_server.go:325] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1227 20:29:31.078755  349640 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1227 20:29:31.079882  349640 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1227 20:29:31.090296  349640 api_server.go:325] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1227 20:29:31.090325  349640 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1227 20:29:31.580397  349640 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1227 20:29:31.585748  349640 api_server.go:325] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:29:31.585801  349640 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:29:31.606183  349640 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.532971297s)
	I1227 20:29:31.606241  349640 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.515771231s)
	I1227 20:29:31.606358  349640 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.389020153s)
	I1227 20:29:31.607861  349640 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-307728 addons enable metrics-server
	
	I1227 20:29:31.616813  349640 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 20:29:31.617978  349640 addons.go:530] duration metric: took 1.737896941s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1227 20:29:32.080229  349640 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1227 20:29:32.084464  349640 api_server.go:325] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:29:32.084506  349640 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:29:32.580069  349640 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1227 20:29:32.584664  349640 api_server.go:325] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1227 20:29:32.585710  349640 api_server.go:141] control plane version: v1.35.0
	I1227 20:29:32.585733  349640 api_server.go:131] duration metric: took 2.506370541s to wait for apiserver health ...
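[editor's note] The healthz wait above simply re-requests https://<apiserver>:8443/healthz until the body is "ok", tolerating the 403 (anonymous access denied) and 500 (post-start hooks still running) responses seen earlier. A minimal sketch of that polling loop, assuming an anonymous probe that skips certificate verification (endpoint and timeout values are illustrative, not minikube's api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 with body "ok" or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 and 500 mean the apiserver is up but not fully ready yet (see the log above).
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.103.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}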
	I1227 20:29:32.585741  349640 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:29:32.588682  349640 system_pods.go:59] 8 kube-system pods found
	I1227 20:29:32.588707  349640 system_pods.go:61] "coredns-7d764666f9-v4xtw" [54b9ffbd-579b-483a-aa05-a65988e43aae] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 20:29:32.588718  349640 system_pods.go:61] "etcd-newest-cni-307728" [47c59b02-ea05-4deb-a2d5-f33fe18e738b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:29:32.588742  349640 system_pods.go:61] "kindnet-6z4tn" [93ba591e-f91b-4d17-bc19-0df196548fdd] Running
	I1227 20:29:32.588751  349640 system_pods.go:61] "kube-apiserver-newest-cni-307728" [ff05d4da-e496-4611-90a2-32a9e49a76a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:29:32.588759  349640 system_pods.go:61] "kube-controller-manager-newest-cni-307728" [98a6898f-bd6c-4bb5-97eb-767920c25375] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:29:32.588772  349640 system_pods.go:61] "kube-proxy-9qccb" [7af7999b-ede9-4da5-8e6f-df77472e1cdd] Running
	I1227 20:29:32.588778  349640 system_pods.go:61] "kube-scheduler-newest-cni-307728" [cac454d9-fa90-45da-b22c-5d0e23dc78a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:29:32.588785  349640 system_pods.go:61] "storage-provisioner" [b4c1fa65-07d5-4f68-a68b-43acd8569dd1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 20:29:32.588790  349640 system_pods.go:74] duration metric: took 3.044295ms to wait for pod list to return data ...
	I1227 20:29:32.588801  349640 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:29:32.591068  349640 default_sa.go:45] found service account: "default"
	I1227 20:29:32.591087  349640 default_sa.go:55] duration metric: took 2.281836ms for default service account to be created ...
	I1227 20:29:32.591100  349640 kubeadm.go:587] duration metric: took 2.711295065s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 20:29:32.591132  349640 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:29:32.592996  349640 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1227 20:29:32.593016  349640 node_conditions.go:123] node cpu capacity is 8
	I1227 20:29:32.593030  349640 node_conditions.go:105] duration metric: took 1.888982ms to run NodePressure ...
	I1227 20:29:32.593046  349640 start.go:242] waiting for startup goroutines ...
	I1227 20:29:32.593062  349640 start.go:247] waiting for cluster config update ...
	I1227 20:29:32.593076  349640 start.go:256] writing updated cluster config ...
	I1227 20:29:32.593351  349640 ssh_runner.go:195] Run: rm -f paused
	I1227 20:29:32.641222  349640 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	W1227 20:29:30.142107  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	W1227 20:29:32.640365  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	I1227 20:29:32.643768  349640 out.go:179] * Done! kubectl is now configured to use "newest-cni-307728" cluster and "default" namespace by default
	W1227 20:29:35.140008  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	W1227 20:29:37.140375  340025 pod_ready.go:104] pod "coredns-7d764666f9-gtzdb" is not "Ready", error: <nil>
	I1227 20:29:38.142882  340025 pod_ready.go:94] pod "coredns-7d764666f9-gtzdb" is "Ready"
	I1227 20:29:38.142921  340025 pod_ready.go:86] duration metric: took 39.50828616s for pod "coredns-7d764666f9-gtzdb" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:29:38.154297  340025 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-954154" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:29:38.162018  340025 pod_ready.go:94] pod "etcd-default-k8s-diff-port-954154" is "Ready"
	I1227 20:29:38.162051  340025 pod_ready.go:86] duration metric: took 7.724693ms for pod "etcd-default-k8s-diff-port-954154" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:29:38.252005  340025 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-954154" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:29:38.256448  340025 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-954154" is "Ready"
	I1227 20:29:38.256469  340025 pod_ready.go:86] duration metric: took 4.441152ms for pod "kube-apiserver-default-k8s-diff-port-954154" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:29:38.258219  340025 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-954154" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:29:38.339067  340025 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-954154" is "Ready"
	I1227 20:29:38.339093  340025 pod_ready.go:86] duration metric: took 80.855659ms for pod "kube-controller-manager-default-k8s-diff-port-954154" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:29:38.540066  340025 pod_ready.go:83] waiting for pod "kube-proxy-m5zcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:29:38.939011  340025 pod_ready.go:94] pod "kube-proxy-m5zcc" is "Ready"
	I1227 20:29:38.939036  340025 pod_ready.go:86] duration metric: took 398.941518ms for pod "kube-proxy-m5zcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:29:39.138935  340025 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-954154" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:29:39.539151  340025 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-954154" is "Ready"
	I1227 20:29:39.539183  340025 pod_ready.go:86] duration metric: took 400.221359ms for pod "kube-scheduler-default-k8s-diff-port-954154" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:29:39.539199  340025 pod_ready.go:40] duration metric: took 40.907731262s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:29:39.585976  340025 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I1227 20:29:39.587469  340025 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-954154" cluster and "default" namespace by default
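[editor's note] The pod_ready loop logged above repeatedly fetches each kube-system pod and waits for its Ready condition, treating a deleted pod as success ("Ready or be gone"). A hedged client-go sketch of that pattern (kubeconfig path, namespace and pod name are placeholders; this is not minikube's pod_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the pod reports Ready=True or no longer exists.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return nil // pod is gone, which also ends the wait in the log above
		}
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_ = waitPodReady(cs, "kube-system", "coredns-7d764666f9-gtzdb", time.Minute)
}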
	
	
	==> CRI-O <==
	Dec 27 20:29:19 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:19.023663042Z" level=info msg="Started container" PID=1768 containerID=2315d2516654450c12a328473fb42443cc449f79307913236f7e55076a17a482 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97/dashboard-metrics-scraper id=583207b1-7eaf-4995-be6b-91980d347912 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fbe597bbd1c4320be908e3a167ad815e83f13c8c3051e6604c8cd09fc9c0eaad
	Dec 27 20:29:19 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:19.066500198Z" level=info msg="Removing container: fc0c624c1d035c21e1d073440fc1a5e770275a7f859b82c16192bbab9eda985f" id=2f37defa-ebde-4a39-8395-34ae9705c944 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:29:19 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:19.076019353Z" level=info msg="Removed container fc0c624c1d035c21e1d073440fc1a5e770275a7f859b82c16192bbab9eda985f: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97/dashboard-metrics-scraper" id=2f37defa-ebde-4a39-8395-34ae9705c944 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:29:29 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:29.093557502Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5c989034-efbb-460a-8bba-59410a094837 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:29:29 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:29.094639628Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e2aa4b88-a35d-40ac-b652-51c89d7cc97d name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:29:29 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:29.095683795Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=2a3963ed-352b-4dd9-a2b3-3e2f1a2f2a15 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:29:29 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:29.095894345Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:29 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:29.101606368Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:29 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:29.101795084Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0fbc20fbc36d0565592756b0b70819f8450ef85ca74a7b473113f815f62ca5a3/merged/etc/passwd: no such file or directory"
	Dec 27 20:29:29 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:29.101830957Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0fbc20fbc36d0565592756b0b70819f8450ef85ca74a7b473113f815f62ca5a3/merged/etc/group: no such file or directory"
	Dec 27 20:29:29 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:29.102145997Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:29 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:29.129481472Z" level=info msg="Created container 2af3abf306111acd86d61aed1f801fe21ed75889c95284c3039cead7dfd97901: kube-system/storage-provisioner/storage-provisioner" id=2a3963ed-352b-4dd9-a2b3-3e2f1a2f2a15 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:29:29 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:29.130276263Z" level=info msg="Starting container: 2af3abf306111acd86d61aed1f801fe21ed75889c95284c3039cead7dfd97901" id=940408d3-4aac-48df-bd10-d163f1e5adf2 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:29:29 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:29.132407806Z" level=info msg="Started container" PID=1783 containerID=2af3abf306111acd86d61aed1f801fe21ed75889c95284c3039cead7dfd97901 description=kube-system/storage-provisioner/storage-provisioner id=940408d3-4aac-48df-bd10-d163f1e5adf2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7539d7bae1e5992956c4d41119c1f97a76e90ee5614a3cb15063b0390280b582
	Dec 27 20:29:41 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:41.967974996Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=bb4f8dc3-8cfe-416f-bf9c-53976e562944 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:29:41 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:41.969036914Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=706cd668-a4b0-444b-b4c5-d87cbdcdaad3 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:29:41 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:41.970059416Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97/dashboard-metrics-scraper" id=1bd92ff0-5c9c-4153-9055-efd7c7dacc7f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:29:41 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:41.970197924Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:41 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:41.975046288Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:41 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:41.975463787Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:29:42 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:42.00771465Z" level=info msg="Created container aeb3a9e4af58f86b907a9dc8c2dbf3b0d1d10b345599d18f8009993fd2521fee: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97/dashboard-metrics-scraper" id=1bd92ff0-5c9c-4153-9055-efd7c7dacc7f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:29:42 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:42.008264785Z" level=info msg="Starting container: aeb3a9e4af58f86b907a9dc8c2dbf3b0d1d10b345599d18f8009993fd2521fee" id=45db7c93-dc34-4357-bc94-52f71805af59 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:29:42 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:42.00983478Z" level=info msg="Started container" PID=1822 containerID=aeb3a9e4af58f86b907a9dc8c2dbf3b0d1d10b345599d18f8009993fd2521fee description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97/dashboard-metrics-scraper id=45db7c93-dc34-4357-bc94-52f71805af59 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fbe597bbd1c4320be908e3a167ad815e83f13c8c3051e6604c8cd09fc9c0eaad
	Dec 27 20:29:42 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:42.132840299Z" level=info msg="Removing container: 2315d2516654450c12a328473fb42443cc449f79307913236f7e55076a17a482" id=a5df3a13-7fc7-4d1a-a5f1-97fbe826d1d0 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:29:42 default-k8s-diff-port-954154 crio[566]: time="2025-12-27T20:29:42.141833579Z" level=info msg="Removed container 2315d2516654450c12a328473fb42443cc449f79307913236f7e55076a17a482: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97/dashboard-metrics-scraper" id=a5df3a13-7fc7-4d1a-a5f1-97fbe826d1d0 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	aeb3a9e4af58f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago       Exited              dashboard-metrics-scraper   3                   fbe597bbd1c43       dashboard-metrics-scraper-867fb5f87b-spk97             kubernetes-dashboard
	2af3abf306111       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago       Running             storage-provisioner         1                   7539d7bae1e59       storage-provisioner                                    kube-system
	ab53c9c42b91e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   50 seconds ago       Running             kubernetes-dashboard        0                   4e827ed1c0981       kubernetes-dashboard-b84665fb8-nqh72                   kubernetes-dashboard
	547e235fbc652       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   6955da1e30a39       busybox                                                default
	a235d8e194a33       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           57 seconds ago       Running             coredns                     0                   f3b3a580f89ab       coredns-7d764666f9-gtzdb                               kube-system
	9534087ad19d0       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           57 seconds ago       Running             kindnet-cni                 0                   bf8695edbbef3       kindnet-c9zm9                                          kube-system
	90d96883673c7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         0                   7539d7bae1e59       storage-provisioner                                    kube-system
	a83814d26e9fe       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           57 seconds ago       Running             kube-proxy                  0                   f696a6e43a45d       kube-proxy-m5zcc                                       kube-system
	5931439a8c0a5       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           About a minute ago   Running             kube-controller-manager     0                   11496f2d0c2d9       kube-controller-manager-default-k8s-diff-port-954154   kube-system
	706a22c5fabaf       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           About a minute ago   Running             kube-scheduler              0                   a16e518fc8268       kube-scheduler-default-k8s-diff-port-954154            kube-system
	0959afbe1a995       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           About a minute ago   Running             kube-apiserver              0                   45bac79303d48       kube-apiserver-default-k8s-diff-port-954154            kube-system
	8b0860861ddb0       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           About a minute ago   Running             etcd                        0                   3dddbd76b9d06       etcd-default-k8s-diff-port-954154                      kube-system
	
	
	==> coredns [a235d8e194a333df66ad83d10b5899176171cd0d7c0c95256c8864cb76d3b1c2] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:41326 - 30416 "HINFO IN 6118200687848454748.8221138099289553947. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.064664195s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-954154
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-954154
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=default-k8s-diff-port-954154
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_27_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:27:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-954154
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:29:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:29:27 +0000   Sat, 27 Dec 2025 20:27:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:29:27 +0000   Sat, 27 Dec 2025 20:27:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:29:27 +0000   Sat, 27 Dec 2025 20:27:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:29:27 +0000   Sat, 27 Dec 2025 20:28:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-954154
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a05ebbd9dbde47388fc365d694bbbfd
	  System UUID:                7ca85da6-448a-4be6-8ab2-a8891caf574d
	  Boot ID:                    e862e3ac-f8e7-431b-a536-9252fc29cb3f
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-7d764666f9-gtzdb                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-default-k8s-diff-port-954154                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-c9zm9                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-default-k8s-diff-port-954154             250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-954154    200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-m5zcc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-default-k8s-diff-port-954154             100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-spk97              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-nqh72                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  113s  node-controller  Node default-k8s-diff-port-954154 event: Registered Node default-k8s-diff-port-954154 in Controller
	  Normal  RegisteredNode  55s   node-controller  Node default-k8s-diff-port-954154 event: Registered Node default-k8s-diff-port-954154 in Controller
	
	
	==> dmesg <==
	[  +0.091803] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026212] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.286875] kauditd_printk_skb: 47 callbacks suppressed
	[Dec27 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e cb b0 88 25 d7 08 06
	[ +12.641300] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 46 2d 1c 8d 7d 08 06
	[  +0.000396] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e cb b0 88 25 d7 08 06
	[Dec27 20:26] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 11 8b 52 63 6c 08 06
	[ +22.750477] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 b6 be 1c 32 17 08 06
	[  +0.000388] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 11 8b 52 63 6c 08 06
	[ +11.650371] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae cd 49 2a 1d c0 08 06
	[Dec27 20:27] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 53 ca 84 73 c0 08 06
	[  +0.000337] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 74 f5 52 32 9a 08 06
	[ +17.275927] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 80 6a 51 25 19 08 06
	[  +0.000341] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ae cd 49 2a 1d c0 08 06
	
	
	==> etcd [8b0860861ddb032e4078fd3575653ab5f5a77cc25e17d9ecb7910330acbef6e8] <==
	{"level":"info","ts":"2025-12-27T20:28:55.567311Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T20:28:55.567322Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T20:28:55.566799Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-27T20:28:55.567434Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T20:28:55.567490Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T20:28:55.567773Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T20:28:55.567859Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T20:28:55.658145Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T20:28:55.658308Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T20:28:55.658406Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-27T20:28:55.658448Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:28:55.658487Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T20:28:55.660407Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-27T20:28:55.660445Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:28:55.660468Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-27T20:28:55.660478Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-27T20:28:55.664528Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:default-k8s-diff-port-954154 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:28:55.664567Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:28:55.664728Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:28:55.665070Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:28:55.665140Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T20:28:55.666063Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:28:55.667302Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:28:55.671698Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:28:55.671943Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 20:29:55 up  1:12,  0 user,  load average: 2.46, 2.99, 2.25
	Linux default-k8s-diff-port-954154 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9534087ad19d0cf1c6a64a0fc06e25e8871c31789bd13e3b1daa949f660f0cb3] <==
	I1227 20:28:58.505554       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:28:58.505822       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1227 20:28:58.506040       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:28:58.506067       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:28:58.506088       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:28:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:28:58.707751       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:28:58.707806       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:28:58.707820       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:28:58.898908       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 20:28:59.098886       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:28:59.098951       1 metrics.go:72] Registering metrics
	I1227 20:28:59.099046       1 controller.go:711] "Syncing nftables rules"
	I1227 20:29:08.708122       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 20:29:08.708193       1 main.go:301] handling current node
	I1227 20:29:18.707999       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 20:29:18.708056       1 main.go:301] handling current node
	I1227 20:29:28.708149       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 20:29:28.708188       1 main.go:301] handling current node
	I1227 20:29:38.707248       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 20:29:38.707285       1 main.go:301] handling current node
	I1227 20:29:48.707895       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 20:29:48.707945       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0959afbe1a9958167e4657feae2ee275c220e183182d01f3bfb3c248002f75ad] <==
	I1227 20:28:57.020153       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1227 20:28:57.020262       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 20:28:57.020382       1 aggregator.go:187] initial CRD sync complete...
	I1227 20:28:57.020392       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 20:28:57.020399       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 20:28:57.020405       1 cache.go:39] Caches are synced for autoregister controller
	I1227 20:28:57.020579       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 20:28:57.020584       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 20:28:57.021688       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1227 20:28:57.025789       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 20:28:57.027846       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 20:28:57.034486       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:28:57.065289       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:28:57.087479       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 20:28:57.313536       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 20:28:57.340504       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 20:28:57.357551       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:28:57.364354       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:28:57.371899       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:28:57.404072       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.38.19"}
	I1227 20:28:57.413393       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.13.191"}
	I1227 20:28:57.923310       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:29:00.595848       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 20:29:00.696397       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:29:00.796671       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [5931439a8c0a571b5456aa8ff89ec7efa4a328c297d4825276b5ce8da13b99a6] <==
	I1227 20:29:00.148052       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.148235       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.148338       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 20:29:00.148357       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:29:00.148363       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.148373       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.148388       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.148493       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.148784       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.148823       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.148983       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.149009       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.149136       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.149180       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.149250       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.149675       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.149794       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.149799       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.150046       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.153950       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.157836       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:29:00.248085       1 shared_informer.go:377] "Caches are synced"
	I1227 20:29:00.248115       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:29:00.248121       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:29:00.259004       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [a83814d26e9fe52126c4d08033b6e1e1f2a478f9db8a48f3f69ebb4c0202e7d1] <==
	I1227 20:28:58.363864       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:28:58.422816       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:28:58.523780       1 shared_informer.go:377] "Caches are synced"
	I1227 20:28:58.523820       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1227 20:28:58.523961       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:28:58.542584       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:28:58.542642       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:28:58.547654       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:28:58.548025       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:28:58.548042       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:28:58.549263       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:28:58.549332       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:28:58.549301       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:28:58.549416       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:28:58.549337       1 config.go:309] "Starting node config controller"
	I1227 20:28:58.549466       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:28:58.549489       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:28:58.549510       1 config.go:200] "Starting service config controller"
	I1227 20:28:58.549567       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:28:58.649904       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 20:28:58.649935       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 20:28:58.649958       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [706a22c5fabaf1ee5036f9f7e94b4c7a02acaeb2002009e6416b6682929512f2] <==
	I1227 20:28:55.812523       1 serving.go:386] Generated self-signed cert in-memory
	W1227 20:28:56.932206       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 20:28:56.932264       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 20:28:56.932276       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 20:28:56.932373       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 20:28:56.972847       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 20:28:56.972901       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:28:56.979652       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 20:28:56.979717       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:28:56.982521       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 20:28:56.982582       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1227 20:28:57.001716       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 20:28:57.004005       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 20:28:57.004362       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 20:28:57.004471       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 20:28:57.004709       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 20:28:57.006084       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	I1227 20:28:57.079958       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:29:17 default-k8s-diff-port-954154 kubelet[730]: E1227 20:29:17.796085     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-spk97_kubernetes-dashboard(5efd0be2-9a08-4d0d-9d81-a027dffb3bd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97" podUID="5efd0be2-9a08-4d0d-9d81-a027dffb3bd7"
	Dec 27 20:29:18 default-k8s-diff-port-954154 kubelet[730]: E1227 20:29:18.967839     730 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97" containerName="dashboard-metrics-scraper"
	Dec 27 20:29:18 default-k8s-diff-port-954154 kubelet[730]: I1227 20:29:18.967888     730 scope.go:122] "RemoveContainer" containerID="fc0c624c1d035c21e1d073440fc1a5e770275a7f859b82c16192bbab9eda985f"
	Dec 27 20:29:19 default-k8s-diff-port-954154 kubelet[730]: I1227 20:29:19.065295     730 scope.go:122] "RemoveContainer" containerID="fc0c624c1d035c21e1d073440fc1a5e770275a7f859b82c16192bbab9eda985f"
	Dec 27 20:29:19 default-k8s-diff-port-954154 kubelet[730]: E1227 20:29:19.065530     730 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97" containerName="dashboard-metrics-scraper"
	Dec 27 20:29:19 default-k8s-diff-port-954154 kubelet[730]: I1227 20:29:19.065562     730 scope.go:122] "RemoveContainer" containerID="2315d2516654450c12a328473fb42443cc449f79307913236f7e55076a17a482"
	Dec 27 20:29:19 default-k8s-diff-port-954154 kubelet[730]: E1227 20:29:19.065749     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-spk97_kubernetes-dashboard(5efd0be2-9a08-4d0d-9d81-a027dffb3bd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97" podUID="5efd0be2-9a08-4d0d-9d81-a027dffb3bd7"
	Dec 27 20:29:27 default-k8s-diff-port-954154 kubelet[730]: E1227 20:29:27.795550     730 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97" containerName="dashboard-metrics-scraper"
	Dec 27 20:29:27 default-k8s-diff-port-954154 kubelet[730]: I1227 20:29:27.795591     730 scope.go:122] "RemoveContainer" containerID="2315d2516654450c12a328473fb42443cc449f79307913236f7e55076a17a482"
	Dec 27 20:29:27 default-k8s-diff-port-954154 kubelet[730]: E1227 20:29:27.795750     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-spk97_kubernetes-dashboard(5efd0be2-9a08-4d0d-9d81-a027dffb3bd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97" podUID="5efd0be2-9a08-4d0d-9d81-a027dffb3bd7"
	Dec 27 20:29:29 default-k8s-diff-port-954154 kubelet[730]: I1227 20:29:29.093040     730 scope.go:122] "RemoveContainer" containerID="90d96883673c78a13b5329162bd2a7485dcb13c728e7539421116adef8f9b6c4"
	Dec 27 20:29:38 default-k8s-diff-port-954154 kubelet[730]: E1227 20:29:38.120530     730 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-gtzdb" containerName="coredns"
	Dec 27 20:29:41 default-k8s-diff-port-954154 kubelet[730]: E1227 20:29:41.967370     730 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97" containerName="dashboard-metrics-scraper"
	Dec 27 20:29:41 default-k8s-diff-port-954154 kubelet[730]: I1227 20:29:41.967408     730 scope.go:122] "RemoveContainer" containerID="2315d2516654450c12a328473fb42443cc449f79307913236f7e55076a17a482"
	Dec 27 20:29:42 default-k8s-diff-port-954154 kubelet[730]: I1227 20:29:42.131565     730 scope.go:122] "RemoveContainer" containerID="2315d2516654450c12a328473fb42443cc449f79307913236f7e55076a17a482"
	Dec 27 20:29:42 default-k8s-diff-port-954154 kubelet[730]: E1227 20:29:42.131799     730 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97" containerName="dashboard-metrics-scraper"
	Dec 27 20:29:42 default-k8s-diff-port-954154 kubelet[730]: I1227 20:29:42.131830     730 scope.go:122] "RemoveContainer" containerID="aeb3a9e4af58f86b907a9dc8c2dbf3b0d1d10b345599d18f8009993fd2521fee"
	Dec 27 20:29:42 default-k8s-diff-port-954154 kubelet[730]: E1227 20:29:42.132043     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-spk97_kubernetes-dashboard(5efd0be2-9a08-4d0d-9d81-a027dffb3bd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97" podUID="5efd0be2-9a08-4d0d-9d81-a027dffb3bd7"
	Dec 27 20:29:47 default-k8s-diff-port-954154 kubelet[730]: E1227 20:29:47.795096     730 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97" containerName="dashboard-metrics-scraper"
	Dec 27 20:29:47 default-k8s-diff-port-954154 kubelet[730]: I1227 20:29:47.795134     730 scope.go:122] "RemoveContainer" containerID="aeb3a9e4af58f86b907a9dc8c2dbf3b0d1d10b345599d18f8009993fd2521fee"
	Dec 27 20:29:47 default-k8s-diff-port-954154 kubelet[730]: E1227 20:29:47.795295     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-spk97_kubernetes-dashboard(5efd0be2-9a08-4d0d-9d81-a027dffb3bd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-spk97" podUID="5efd0be2-9a08-4d0d-9d81-a027dffb3bd7"
	Dec 27 20:29:51 default-k8s-diff-port-954154 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 20:29:51 default-k8s-diff-port-954154 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 20:29:51 default-k8s-diff-port-954154 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:29:51 default-k8s-diff-port-954154 systemd[1]: kubelet.service: Consumed 1.766s CPU time.
	
	
	==> kubernetes-dashboard [ab53c9c42b91e3b26fe7869e87d99f9ffa94077f731d37a6fd683cc5012d55de] <==
	2025/12/27 20:29:04 Starting overwatch
	2025/12/27 20:29:04 Using namespace: kubernetes-dashboard
	2025/12/27 20:29:04 Using in-cluster config to connect to apiserver
	2025/12/27 20:29:04 Using secret token for csrf signing
	2025/12/27 20:29:04 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 20:29:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 20:29:04 Successful initial request to the apiserver, version: v1.35.0
	2025/12/27 20:29:04 Generating JWE encryption key
	2025/12/27 20:29:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 20:29:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 20:29:04 Initializing JWE encryption key from synchronized object
	2025/12/27 20:29:04 Creating in-cluster Sidecar client
	2025/12/27 20:29:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 20:29:04 Serving insecurely on HTTP port: 9090
	2025/12/27 20:29:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2af3abf306111acd86d61aed1f801fe21ed75889c95284c3039cead7dfd97901] <==
	I1227 20:29:29.148449       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 20:29:29.158113       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 20:29:29.158254       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 20:29:29.165478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:32.621186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:36.882135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:40.480859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:43.534292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:46.556518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:46.560500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:29:46.560627       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 20:29:46.560786       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-954154_81261543-4121-460f-ba19-6f2077900bf4!
	I1227 20:29:46.560785       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d6330280-d91e-46b9-b706-b20e6fbb3c3b", APIVersion:"v1", ResourceVersion:"639", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-954154_81261543-4121-460f-ba19-6f2077900bf4 became leader
	W1227 20:29:46.562495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:46.566388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:29:46.661090       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-954154_81261543-4121-460f-ba19-6f2077900bf4!
	W1227 20:29:48.569479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:48.572961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:50.575717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:50.579318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:52.582355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:52.587962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:54.591750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:29:54.596119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [90d96883673c78a13b5329162bd2a7485dcb13c728e7539421116adef8f9b6c4] <==
	I1227 20:28:58.334010       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 20:29:28.336382       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-954154 -n default-k8s-diff-port-954154
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-954154 -n default-k8s-diff-port-954154: exit status 2 (313.32686ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-954154 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.07s)

                                                
                                    

Test pass (279/332)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 4.17
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.35.0/json-events 2.71
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.07
18 TestDownloadOnly/v1.35.0/DeleteAll 0.21
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 0.38
21 TestBinaryMirror 0.78
22 TestOffline 58.13
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 92.23
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 7.4
48 TestAddons/StoppedEnableDisable 16.63
49 TestCertOptions 20.78
50 TestCertExpiration 207.63
52 TestForceSystemdFlag 18.69
53 TestForceSystemdEnv 37.36
58 TestErrorSpam/setup 19.4
59 TestErrorSpam/start 0.62
60 TestErrorSpam/status 0.93
61 TestErrorSpam/pause 6.18
62 TestErrorSpam/unpause 5.38
63 TestErrorSpam/stop 2.55
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 38.77
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 5.89
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.46
75 TestFunctional/serial/CacheCmd/cache/add_local 1.22
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.43
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 40.88
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.13
86 TestFunctional/serial/LogsFileCmd 1.14
87 TestFunctional/serial/InvalidService 3.96
89 TestFunctional/parallel/ConfigCmd 0.43
90 TestFunctional/parallel/DashboardCmd 6.81
91 TestFunctional/parallel/DryRun 0.43
92 TestFunctional/parallel/InternationalLanguage 0.18
93 TestFunctional/parallel/StatusCmd 1.12
97 TestFunctional/parallel/ServiceCmdConnect 14.69
98 TestFunctional/parallel/AddonsCmd 0.19
99 TestFunctional/parallel/PersistentVolumeClaim 28.16
101 TestFunctional/parallel/SSHCmd 0.53
102 TestFunctional/parallel/CpCmd 1.77
103 TestFunctional/parallel/MySQL 26.43
104 TestFunctional/parallel/FileSync 0.3
105 TestFunctional/parallel/CertSync 1.72
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.58
113 TestFunctional/parallel/License 0.46
114 TestFunctional/parallel/ServiceCmd/DeployApp 10.18
115 TestFunctional/parallel/Version/short 0.06
116 TestFunctional/parallel/Version/components 0.46
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
121 TestFunctional/parallel/ImageCommands/ImageBuild 2
122 TestFunctional/parallel/ImageCommands/Setup 2.68
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
124 TestFunctional/parallel/ProfileCmd/profile_list 0.39
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.34
127 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
128 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
129 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
130 TestFunctional/parallel/MountCmd/any-port 7.79
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.81
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.51
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.32
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.45
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.57
136 TestFunctional/parallel/ServiceCmd/List 0.5
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.38
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
141 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.43
142 TestFunctional/parallel/ServiceCmd/Format 0.36
143 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.21
146 TestFunctional/parallel/ServiceCmd/URL 0.39
147 TestFunctional/parallel/MountCmd/specific-port 1.84
148 TestFunctional/parallel/MountCmd/VerifyCleanup 1.96
149 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
150 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
154 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.01
162 TestMultiControlPlane/serial/StartCluster 104.01
163 TestMultiControlPlane/serial/DeployApp 5.02
164 TestMultiControlPlane/serial/PingHostFromPods 0.98
165 TestMultiControlPlane/serial/AddWorkerNode 24.9
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.87
168 TestMultiControlPlane/serial/CopyFile 16.18
169 TestMultiControlPlane/serial/StopSecondaryNode 14.21
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.69
171 TestMultiControlPlane/serial/RestartSecondaryNode 8.65
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.87
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 101.56
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.47
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.68
176 TestMultiControlPlane/serial/StopCluster 43.42
177 TestMultiControlPlane/serial/RestartCluster 56.02
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
179 TestMultiControlPlane/serial/AddSecondaryNode 28.85
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.87
185 TestJSONOutput/start/Command 38.83
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.04
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.21
210 TestKicCustomNetwork/create_custom_network 26.21
211 TestKicCustomNetwork/use_default_bridge_network 18.97
212 TestKicExistingNetwork 19.96
213 TestKicCustomSubnet 20.71
214 TestKicStaticIP 20.55
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 40.17
219 TestMountStart/serial/StartWithMountFirst 4.52
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 4.56
222 TestMountStart/serial/VerifyMountSecond 0.25
223 TestMountStart/serial/DeleteFirst 1.62
224 TestMountStart/serial/VerifyMountPostDelete 0.25
225 TestMountStart/serial/Stop 1.24
226 TestMountStart/serial/RestartStopped 7.13
227 TestMountStart/serial/VerifyMountPostStop 0.25
230 TestMultiNode/serial/FreshStart2Nodes 62.38
231 TestMultiNode/serial/DeployApp2Nodes 2.69
232 TestMultiNode/serial/PingHostFrom2Pods 0.68
233 TestMultiNode/serial/AddNode 23.32
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.62
236 TestMultiNode/serial/CopyFile 9.15
237 TestMultiNode/serial/StopNode 2.21
238 TestMultiNode/serial/StartAfterStop 6.92
239 TestMultiNode/serial/RestartKeepsNodes 80.98
240 TestMultiNode/serial/DeleteNode 5.16
241 TestMultiNode/serial/StopMultiNode 28.59
242 TestMultiNode/serial/RestartMultiNode 27.75
243 TestMultiNode/serial/ValidateNameConflict 22.71
250 TestScheduledStopUnix 94.77
253 TestInsufficientStorage 8.44
254 TestRunningBinaryUpgrade 289.16
256 TestKubernetesUpgrade 328.11
257 TestMissingContainerUpgrade 70.45
259 TestPause/serial/Start 55.01
260 TestStoppedBinaryUpgrade/Setup 0.78
261 TestStoppedBinaryUpgrade/Upgrade 300.21
262 TestPause/serial/SecondStartNoReconfiguration 6.47
265 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
266 TestNoKubernetes/serial/StartWithK8s 19.4
267 TestNoKubernetes/serial/StartWithStopK8s 24.31
275 TestNetworkPlugins/group/false 4.11
276 TestNoKubernetes/serial/Start 4.27
280 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
281 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
282 TestNoKubernetes/serial/ProfileList 74.54
290 TestNoKubernetes/serial/Stop 1.24
291 TestNoKubernetes/serial/StartNoArgs 8.23
292 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
293 TestPreload/Start-NoPreload-PullImage 53.55
294 TestPreload/Restart-With-Preload-Check-User-Image 50.19
295 TestStoppedBinaryUpgrade/MinikubeLogs 0.99
297 TestNetworkPlugins/group/auto/Start 37.63
298 TestNetworkPlugins/group/kindnet/Start 41.62
299 TestNetworkPlugins/group/auto/KubeletFlags 0.28
300 TestNetworkPlugins/group/auto/NetCatPod 11.2
301 TestNetworkPlugins/group/auto/DNS 0.11
302 TestNetworkPlugins/group/auto/Localhost 0.11
303 TestNetworkPlugins/group/auto/HairPin 0.1
304 TestNetworkPlugins/group/calico/Start 51.4
305 TestNetworkPlugins/group/custom-flannel/Start 40.71
306 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
307 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
308 TestNetworkPlugins/group/kindnet/NetCatPod 9.19
309 TestNetworkPlugins/group/kindnet/DNS 0.14
310 TestNetworkPlugins/group/kindnet/Localhost 0.11
311 TestNetworkPlugins/group/kindnet/HairPin 0.12
312 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
313 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.19
314 TestNetworkPlugins/group/calico/ControllerPod 6.09
315 TestNetworkPlugins/group/enable-default-cni/Start 39.28
316 TestNetworkPlugins/group/calico/KubeletFlags 0.51
317 TestNetworkPlugins/group/calico/NetCatPod 9.56
318 TestNetworkPlugins/group/custom-flannel/DNS 0.16
319 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
320 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
321 TestNetworkPlugins/group/calico/DNS 0.13
322 TestNetworkPlugins/group/calico/Localhost 0.09
323 TestNetworkPlugins/group/calico/HairPin 0.1
324 TestNetworkPlugins/group/flannel/Start 46.29
325 TestNetworkPlugins/group/bridge/Start 60.89
326 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
327 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.24
328 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
329 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
330 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
332 TestStartStop/group/old-k8s-version/serial/FirstStart 48.83
334 TestStartStop/group/no-preload/serial/FirstStart 46.6
335 TestNetworkPlugins/group/flannel/ControllerPod 6.01
336 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
337 TestNetworkPlugins/group/flannel/NetCatPod 9.19
338 TestNetworkPlugins/group/flannel/DNS 0.11
339 TestNetworkPlugins/group/flannel/Localhost 0.1
340 TestNetworkPlugins/group/flannel/HairPin 0.09
341 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
342 TestNetworkPlugins/group/bridge/NetCatPod 9.21
343 TestStartStop/group/old-k8s-version/serial/DeployApp 8.29
344 TestNetworkPlugins/group/bridge/DNS 0.12
345 TestNetworkPlugins/group/bridge/Localhost 0.11
346 TestNetworkPlugins/group/bridge/HairPin 0.09
348 TestStartStop/group/old-k8s-version/serial/Stop 16.47
350 TestStartStop/group/embed-certs/serial/FirstStart 37.16
351 TestStartStop/group/no-preload/serial/DeployApp 9.22
353 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
354 TestStartStop/group/old-k8s-version/serial/SecondStart 47.12
356 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 40.01
357 TestStartStop/group/no-preload/serial/Stop 16.27
358 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
359 TestStartStop/group/no-preload/serial/SecondStart 51.51
360 TestStartStop/group/embed-certs/serial/DeployApp 7.24
362 TestStartStop/group/embed-certs/serial/Stop 16.62
363 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 6.24
365 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
366 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
367 TestStartStop/group/embed-certs/serial/SecondStart 45.69
368 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.39
369 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
370 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
372 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
373 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 52.35
375 TestStartStop/group/newest-cni/serial/FirstStart 21.42
376 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
377 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
378 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
380 TestStartStop/group/newest-cni/serial/DeployApp 0
382 TestPreload/PreloadSrc/gcs 3.99
383 TestStartStop/group/newest-cni/serial/Stop 10.26
384 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
385 TestPreload/PreloadSrc/github 5.61
386 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
387 TestPreload/PreloadSrc/gcs-cached 0.78
388 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
389 TestStartStop/group/newest-cni/serial/SecondStart 10.28
390 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
392 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
393 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
394 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
396 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
397 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
398 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23

TestDownloadOnly/v1.28.0/json-events (4.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-888117 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-888117 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.170291105s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.17s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1227 19:55:03.455648   14427 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1227 19:55:03.455711   14427 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-888117
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-888117: exit status 85 (69.345471ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-888117 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-888117 │ jenkins │ v1.37.0 │ 27 Dec 25 19:54 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 19:54:59
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 19:54:59.337971   14439 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:54:59.338102   14439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:54:59.338111   14439 out.go:374] Setting ErrFile to fd 2...
	I1227 19:54:59.338116   14439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:54:59.338313   14439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	W1227 19:54:59.338452   14439 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22332-10897/.minikube/config/config.json: open /home/jenkins/minikube-integration/22332-10897/.minikube/config/config.json: no such file or directory
	I1227 19:54:59.339021   14439 out.go:368] Setting JSON to true
	I1227 19:54:59.339947   14439 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2248,"bootTime":1766863051,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 19:54:59.340005   14439 start.go:143] virtualization: kvm guest
	I1227 19:54:59.345158   14439 out.go:99] [download-only-888117] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1227 19:54:59.345287   14439 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball: no such file or directory
	I1227 19:54:59.345326   14439 notify.go:221] Checking for updates...
	I1227 19:54:59.346421   14439 out.go:171] MINIKUBE_LOCATION=22332
	I1227 19:54:59.347589   14439 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 19:54:59.349103   14439 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 19:54:59.350128   14439 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube
	I1227 19:54:59.351097   14439 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1227 19:54:59.353021   14439 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1227 19:54:59.353220   14439 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 19:54:59.376197   14439 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 19:54:59.376275   14439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 19:54:59.586744   14439 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-27 19:54:59.577945045 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 19:54:59.586843   14439 docker.go:319] overlay module found
	I1227 19:54:59.588339   14439 out.go:99] Using the docker driver based on user configuration
	I1227 19:54:59.588367   14439 start.go:309] selected driver: docker
	I1227 19:54:59.588372   14439 start.go:928] validating driver "docker" against <nil>
	I1227 19:54:59.588457   14439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 19:54:59.646102   14439 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-27 19:54:59.636504272 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 19:54:59.646264   14439 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 19:54:59.646723   14439 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1227 19:54:59.646868   14439 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 19:54:59.648371   14439 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-888117 host does not exist
	  To start a cluster, run: "minikube start -p download-only-888117"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-888117
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.35.0/json-events (2.71s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-695376 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-695376 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (2.710168737s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (2.71s)

                                                
                                    
TestDownloadOnly/v1.35.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I1227 19:55:06.576211   14427 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
I1227 19:55:06.576255   14427 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-10897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-695376
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-695376: exit status 85 (68.118251ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-888117 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-888117 │ jenkins │ v1.37.0 │ 27 Dec 25 19:54 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │ 27 Dec 25 19:55 UTC │
	│ delete  │ -p download-only-888117                                                                                                                                                   │ download-only-888117 │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │ 27 Dec 25 19:55 UTC │
	│ start   │ -o=json --download-only -p download-only-695376 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-695376 │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 19:55:03
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 19:55:03.914863   14798 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:55:03.915093   14798 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:55:03.915103   14798 out.go:374] Setting ErrFile to fd 2...
	I1227 19:55:03.915109   14798 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:55:03.915301   14798 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 19:55:03.915751   14798 out.go:368] Setting JSON to true
	I1227 19:55:03.916496   14798 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2253,"bootTime":1766863051,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 19:55:03.916547   14798 start.go:143] virtualization: kvm guest
	I1227 19:55:03.918116   14798 out.go:99] [download-only-695376] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 19:55:03.918260   14798 notify.go:221] Checking for updates...
	I1227 19:55:03.919273   14798 out.go:171] MINIKUBE_LOCATION=22332
	I1227 19:55:03.920299   14798 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 19:55:03.921344   14798 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 19:55:03.922403   14798 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube
	I1227 19:55:03.923303   14798 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1227 19:55:03.925243   14798 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1227 19:55:03.925433   14798 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 19:55:03.946964   14798 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 19:55:03.947074   14798 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 19:55:03.998290   14798 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-27 19:55:03.989274491 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 19:55:03.998377   14798 docker.go:319] overlay module found
	I1227 19:55:03.999737   14798 out.go:99] Using the docker driver based on user configuration
	I1227 19:55:03.999762   14798 start.go:309] selected driver: docker
	I1227 19:55:03.999767   14798 start.go:928] validating driver "docker" against <nil>
	I1227 19:55:03.999833   14798 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 19:55:04.053573   14798 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-27 19:55:04.04398699 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 19:55:04.053740   14798 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 19:55:04.054227   14798 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1227 19:55:04.054363   14798 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 19:55:04.055776   14798 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-695376 host does not exist
	  To start a cluster, run: "minikube start -p download-only-695376"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.35.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-695376
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnlyKic (0.38s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-016221 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "download-docker-016221" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-016221
--- PASS: TestDownloadOnlyKic (0.38s)

                                                
                                    
TestBinaryMirror (0.78s)

                                                
                                                
=== RUN   TestBinaryMirror
I1227 19:55:07.630881   14427 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-353868 --alsologtostderr --binary-mirror http://127.0.0.1:46245 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-353868" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-353868
--- PASS: TestBinaryMirror (0.78s)

                                                
                                    
TestOffline (58.13s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-240096 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-240096 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (55.601416879s)
helpers_test.go:176: Cleaning up "offline-crio-240096" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-240096
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-240096: (2.531583889s)
--- PASS: TestOffline (58.13s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-416077
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-416077: exit status 85 (59.693851ms)

                                                
                                                
-- stdout --
	* Profile "addons-416077" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-416077"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-416077
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-416077: exit status 85 (58.94865ms)

                                                
                                                
-- stdout --
	* Profile "addons-416077" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-416077"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (92.23s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-416077 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-416077 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (1m32.231518147s)
--- PASS: TestAddons/Setup (92.23s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-416077 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-416077 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (7.4s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-416077 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-416077 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [98ee8156-0eab-46e8-83c8-92bb16e99805] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [98ee8156-0eab-46e8-83c8-92bb16e99805] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 7.003107859s
addons_test.go:696: (dbg) Run:  kubectl --context addons-416077 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-416077 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-416077 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (7.40s)

                                                
                                    
TestAddons/StoppedEnableDisable (16.63s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-416077
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-416077: (16.364842204s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-416077
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-416077
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-416077
--- PASS: TestAddons/StoppedEnableDisable (16.63s)

                                                
                                    
TestCertOptions (20.78s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-386859 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-386859 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (17.778199359s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-386859 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-386859 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-386859 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-386859" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-386859
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-386859: (2.37697278s)
--- PASS: TestCertOptions (20.78s)

                                                
                                    
TestCertExpiration (207.63s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-002181 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-002181 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (19.479828957s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-002181 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-002181 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.68580124s)
helpers_test.go:176: Cleaning up "cert-expiration-002181" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-002181
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-002181: (2.468110743s)
--- PASS: TestCertExpiration (207.63s)

                                                
                                    
TestForceSystemdFlag (18.69s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-766151 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-766151 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (15.943689589s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-766151 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-766151" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-766151
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-766151: (2.474457725s)
--- PASS: TestForceSystemdFlag (18.69s)

                                                
                                    
TestForceSystemdEnv (37.36s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-287564 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-287564 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (34.768779059s)
helpers_test.go:176: Cleaning up "force-systemd-env-287564" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-287564
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-287564: (2.588672864s)
--- PASS: TestForceSystemdEnv (37.36s)

                                                
                                    
TestErrorSpam/setup (19.4s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-480558 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-480558 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-480558 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-480558 --driver=docker  --container-runtime=crio: (19.398925044s)
--- PASS: TestErrorSpam/setup (19.40s)

                                                
                                    
TestErrorSpam/start (0.62s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-480558 --log_dir /tmp/nospam-480558 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-480558 --log_dir /tmp/nospam-480558 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-480558 --log_dir /tmp/nospam-480558 start --dry-run
--- PASS: TestErrorSpam/start (0.62s)

                                                
                                    
TestErrorSpam/status (0.93s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-480558 --log_dir /tmp/nospam-480558 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-480558 --log_dir /tmp/nospam-480558 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-480558 --log_dir /tmp/nospam-480558 status
--- PASS: TestErrorSpam/status (0.93s)

                                                
                                    
TestErrorSpam/pause (6.18s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-480558 --log_dir /tmp/nospam-480558 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-480558 --log_dir /tmp/nospam-480558 pause: exit status 80 (1.799067903s)

                                                
                                                
-- stdout --
	* Pausing node nospam-480558 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:05Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-480558 --log_dir /tmp/nospam-480558 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-480558 --log_dir /tmp/nospam-480558 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-480558 --log_dir /tmp/nospam-480558 pause: exit status 80 (2.224000469s)

                                                
                                                
-- stdout --
	* Pausing node nospam-480558 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-480558 --log_dir /tmp/nospam-480558 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-480558 --log_dir /tmp/nospam-480558 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-480558 --log_dir /tmp/nospam-480558 pause: exit status 80 (2.153700918s)

                                                
                                                
-- stdout --
	* Pausing node nospam-480558 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:09Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-480558 --log_dir /tmp/nospam-480558 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.18s)

                                                
                                    
x
+
TestErrorSpam/unpause (5.38s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-480558 --log_dir /tmp/nospam-480558 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-480558 --log_dir /tmp/nospam-480558 unpause: exit status 80 (1.524196129s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-480558 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:11Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-480558 --log_dir /tmp/nospam-480558 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-480558 --log_dir /tmp/nospam-480558 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-480558 --log_dir /tmp/nospam-480558 unpause: exit status 80 (1.922552468s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-480558 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:13Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-480558 --log_dir /tmp/nospam-480558 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-480558 --log_dir /tmp/nospam-480558 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-480558 --log_dir /tmp/nospam-480558 unpause: exit status 80 (1.93743659s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-480558 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:15Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-480558 --log_dir /tmp/nospam-480558 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.38s)
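Note: the exit status 80 in both the pause and unpause runs above comes from the same underlying check: minikube shells into the node and lists containers with `sudo runc list -f json`, which fails because /run/runc does not exist on this crio image. A minimal way to look at both sides of that check from the host, reusing only commands already shown in the logs (a hedged sketch, assuming the nospam-480558 profile is still running):

    out/minikube-linux-amd64 -p nospam-480558 ssh -- sudo runc list -f json       # fails: "open /run/runc: no such file or directory"
    out/minikube-linux-amd64 -p nospam-480558 ssh -- sudo crictl ps -a --quiet    # crio still reports its containers

The subtest still records PASS, so these pause/unpause failures are logged but not treated as fatal by error_spam_test.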

                                                
                                    
x
+
TestErrorSpam/stop (2.55s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-480558 --log_dir /tmp/nospam-480558 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-480558 --log_dir /tmp/nospam-480558 stop: (2.356629921s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-480558 --log_dir /tmp/nospam-480558 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-480558 --log_dir /tmp/nospam-480558 stop
--- PASS: TestErrorSpam/stop (2.55s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22332-10897/.minikube/files/etc/test/nested/copy/14427/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (38.77s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-amd64 start -p functional-487501 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2244: (dbg) Done: out/minikube-linux-amd64 start -p functional-487501 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (38.771991899s)
--- PASS: TestFunctional/serial/StartWithProxy (38.77s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (5.89s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1227 19:59:01.750313   14427 config.go:182] Loaded profile config "functional-487501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-487501 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-487501 --alsologtostderr -v=8: (5.888115036s)
functional_test.go:678: soft start took 5.888929683s for "functional-487501" cluster.
I1227 19:59:07.638962   14427 config.go:182] Loaded profile config "functional-487501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (5.89s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-487501 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.46s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.46s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-487501 /tmp/TestFunctionalserialCacheCmdcacheadd_local280002119/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 cache add minikube-local-cache-test:functional-487501
functional_test.go:1114: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 cache delete minikube-local-cache-test:functional-487501
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-487501
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.43s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-487501 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (269.654183ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.43s)
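The cache_reload flow above deletes the cached pause image inside the node, confirms `crictl inspecti` now fails, then restores the image with `cache reload`. Condensed from the logged commands:

    out/minikube-linux-amd64 -p functional-487501 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-487501 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # exit 1: no such image
    out/minikube-linux-amd64 -p functional-487501 cache reload
    out/minikube-linux-amd64 -p functional-487501 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # image is back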

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 kubectl -- --context functional-487501 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-487501 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (40.88s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-487501 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-487501 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.880499425s)
functional_test.go:776: restart took 40.880630398s for "functional-487501" cluster.
I1227 19:59:54.464497   14427 config.go:182] Loaded profile config "functional-487501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (40.88s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-487501 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
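ComponentHealth inspects the control-plane pods directly rather than relying on `minikube status`. The single logged command, with a comment on what the test appears to read from it (inferred from the phase/status lines above):

    kubectl --context functional-487501 get po -l tier=control-plane -n kube-system -o=json
    # the test checks items[].status.phase and the Ready condition for etcd, kube-apiserver,
    # kube-controller-manager and kube-scheduler, expecting Running/Ready for each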

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.13s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 logs
functional_test.go:1256: (dbg) Done: out/minikube-linux-amd64 -p functional-487501 logs: (1.131731149s)
--- PASS: TestFunctional/serial/LogsCmd (1.13s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.14s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 logs --file /tmp/TestFunctionalserialLogsFileCmd1575294403/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-amd64 -p functional-487501 logs --file /tmp/TestFunctionalserialLogsFileCmd1575294403/001/logs.txt: (1.137247993s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.14s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (3.96s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-487501 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-487501
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-487501: exit status 115 (324.838499ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31127 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-487501 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.96s)
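The exit status 115 above is minikube reporting SVC_UNREACHABLE: the Service object exists (it even gets a NodePort URL in the table), but no running pod backs it. One way to see the same condition with kubectl before the cleanup step; the endpoints check is an illustrative addition, not part of the test:

    kubectl --context functional-487501 apply -f testdata/invalidsvc.yaml
    kubectl --context functional-487501 get endpoints invalid-svc                  # no ready addresses
    out/minikube-linux-amd64 service invalid-svc -p functional-487501              # exits 115 (SVC_UNREACHABLE)
    kubectl --context functional-487501 delete -f testdata/invalidsvc.yaml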

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-487501 config get cpus: exit status 14 (84.685551ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-487501 config get cpus: exit status 14 (69.470635ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
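The two exit status 14 runs are expected: `config get` returns 14 when the key is not present. The full sequence the test drives, taken from the logged commands:

    out/minikube-linux-amd64 -p functional-487501 config unset cpus
    out/minikube-linux-amd64 -p functional-487501 config get cpus      # exit 14: key not in config
    out/minikube-linux-amd64 -p functional-487501 config set cpus 2
    out/minikube-linux-amd64 -p functional-487501 config get cpus      # succeeds (cpus was just set)
    out/minikube-linux-amd64 -p functional-487501 config unset cpus
    out/minikube-linux-amd64 -p functional-487501 config get cpus      # exit 14 again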

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (6.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-487501 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-487501 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 45196: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.81s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-amd64 start -p functional-487501 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-487501 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (188.48449ms)

                                                
                                                
-- stdout --
	* [functional-487501] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 20:00:01.430408   44007 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:00:01.430515   44007 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:00:01.430525   44007 out.go:374] Setting ErrFile to fd 2...
	I1227 20:00:01.430532   44007 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:00:01.430828   44007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:00:01.431359   44007 out.go:368] Setting JSON to false
	I1227 20:00:01.432550   44007 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2550,"bootTime":1766863051,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 20:00:01.432632   44007 start.go:143] virtualization: kvm guest
	I1227 20:00:01.435616   44007 out.go:179] * [functional-487501] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 20:00:01.438728   44007 notify.go:221] Checking for updates...
	I1227 20:00:01.438940   44007 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:00:01.440290   44007 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:00:01.441571   44007 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:00:01.442753   44007 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube
	I1227 20:00:01.443903   44007 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 20:00:01.448442   44007 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:00:01.450055   44007 config.go:182] Loaded profile config "functional-487501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:00:01.450757   44007 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:00:01.478058   44007 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 20:00:01.478167   44007 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:00:01.543052   44007 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-27 20:00:01.532105045 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:00:01.543253   44007 docker.go:319] overlay module found
	I1227 20:00:01.545558   44007 out.go:179] * Using the docker driver based on existing profile
	I1227 20:00:01.546740   44007 start.go:309] selected driver: docker
	I1227 20:00:01.546769   44007 start.go:928] validating driver "docker" against &{Name:functional-487501 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-487501 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:00:01.546870   44007 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:00:01.548906   44007 out.go:203] 
	W1227 20:00:01.549881   44007 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1227 20:00:01.550899   44007 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 start -p functional-487501 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.43s)
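The exit status 23 in the first run is the RSRC_INSUFFICIENT_REQ_MEMORY validation: the requested 250MB is below minikube's 1800MB usable minimum, and --dry-run surfaces this without modifying the existing profile. The second invocation omits the memory override and passes validation. Both commands as logged:

    out/minikube-linux-amd64 start -p functional-487501 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio    # exit 23
    out/minikube-linux-amd64 start -p functional-487501 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio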

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 start -p functional-487501 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-487501 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (176.131675ms)

                                                
                                                
-- stdout --
	* [functional-487501] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 20:00:01.260716   43845 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:00:01.260949   43845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:00:01.260965   43845 out.go:374] Setting ErrFile to fd 2...
	I1227 20:00:01.260970   43845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:00:01.261288   43845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:00:01.261835   43845 out.go:368] Setting JSON to false
	I1227 20:00:01.262796   43845 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2550,"bootTime":1766863051,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 20:00:01.262859   43845 start.go:143] virtualization: kvm guest
	I1227 20:00:01.265684   43845 out.go:179] * [functional-487501] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1227 20:00:01.267178   43845 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:00:01.267236   43845 notify.go:221] Checking for updates...
	I1227 20:00:01.269427   43845 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:00:01.270897   43845 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:00:01.272025   43845 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube
	I1227 20:00:01.273541   43845 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 20:00:01.274735   43845 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:00:01.276476   43845 config.go:182] Loaded profile config "functional-487501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:00:01.277200   43845 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:00:01.302379   43845 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 20:00:01.302462   43845 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:00:01.355614   43845 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-27 20:00:01.346379482 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:00:01.355706   43845 docker.go:319] overlay module found
	I1227 20:00:01.357248   43845 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1227 20:00:01.358460   43845 start.go:309] selected driver: docker
	I1227 20:00:01.358476   43845 start.go:928] validating driver "docker" against &{Name:functional-487501 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-487501 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:00:01.358556   43845 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:00:01.360233   43845 out.go:203] 
	W1227 20:00:01.361311   43845 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1227 20:00:01.362495   43845 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.12s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (14.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-487501 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-487501 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-w66g5" [82a06d7b-e06b-4dd9-82c0-49c8bbdd18a4] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-w66g5" [82a06d7b-e06b-4dd9-82c0-49c8bbdd18a4] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 14.002929588s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:30649
functional_test.go:1685: http://192.168.49.2:30649: success! body:
Request served by hello-node-connect-5d95464fd4-w66g5

HTTP/1.1 GET /

Host: 192.168.49.2:30649
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (14.69s)
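ServiceCmdConnect creates a deployment, exposes it as a NodePort service, asks minikube for the URL, and fetches it; the "Request served by ..." body above is the echo-server's response. The equivalent manual steps, mirroring the logged commands (the trailing curl is illustrative; the test performs the HTTP GET from Go, and the NodePort is assigned per run):

    kubectl --context functional-487501 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
    kubectl --context functional-487501 expose deployment hello-node-connect --type=NodePort --port=8080
    out/minikube-linux-amd64 -p functional-487501 service hello-node-connect --url    # e.g. http://192.168.49.2:30649
    curl http://192.168.49.2:30649/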

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (28.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [c907d6e1-665c-4807-b2ea-932f915f25bc] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003507524s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-487501 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-487501 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-487501 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-487501 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [1a5b8d39-f96a-43f3-9617-55b916a6704b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [1a5b8d39-f96a-43f3-9617-55b916a6704b] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.003076296s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-487501 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-487501 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-487501 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [a0499fad-3e58-4e06-abe0-679d721e6b59] Pending
helpers_test.go:353: "sp-pod" [a0499fad-3e58-4e06-abe0-679d721e6b59] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003742346s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-487501 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.16s)
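The PersistentVolumeClaim test checks that data written through the claim survives deleting and recreating the pod. Condensed from the logged commands:

    kubectl --context functional-487501 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-487501 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-487501 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-487501 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-487501 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-487501 exec sp-pod -- ls /tmp/mount      # foo is still there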

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.53s)

TestFunctional/parallel/CpCmd (1.77s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh -n functional-487501 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 cp functional-487501:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2029476718/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh -n functional-487501 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh -n functional-487501 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.77s)

TestFunctional/parallel/MySQL (26.43s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1803: (dbg) Run:  kubectl --context functional-487501 replace --force -f testdata/mysql.yaml
functional_test.go:1809: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-ckncb" [f62c9544-fe12-4085-b48e-ae9aad30d458] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
I1227 20:00:14.302768   14427 detect.go:223] nested VM detected
helpers_test.go:353: "mysql-7d7b65bc95-ckncb" [f62c9544-fe12-4085-b48e-ae9aad30d458] Running
functional_test.go:1809: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.003336116s
functional_test.go:1817: (dbg) Run:  kubectl --context functional-487501 exec mysql-7d7b65bc95-ckncb -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-487501 exec mysql-7d7b65bc95-ckncb -- mysql -ppassword -e "show databases;": exit status 1 (111.843129ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1227 20:00:30.353520   14427 retry.go:84] will retry after 600ms: exit status 1
functional_test.go:1817: (dbg) Run:  kubectl --context functional-487501 exec mysql-7d7b65bc95-ckncb -- mysql -ppassword -e "show databases;"
I1227 20:00:30.990326   14427 detect.go:223] nested VM detected
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-487501 exec mysql-7d7b65bc95-ckncb -- mysql -ppassword -e "show databases;": exit status 1 (97.955749ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1053 (08S01) at line 1: Server shutdown in progress
	command terminated with exit code 1

** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-487501 exec mysql-7d7b65bc95-ckncb -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-487501 exec mysql-7d7b65bc95-ckncb -- mysql -ppassword -e "show databases;": exit status 1 (83.473803ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-487501 exec mysql-7d7b65bc95-ckncb -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-487501 exec mysql-7d7b65bc95-ckncb -- mysql -ppassword -e "show databases;": exit status 1 (88.932144ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-487501 exec mysql-7d7b65bc95-ckncb -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.43s)
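
Note the retry pattern above: the first exec attempts fail while mysqld is still initializing (access denied, then "Server shutdown in progress", then a missing socket), and only the last attempt succeeds. A minimal sketch of the same readiness probe, assuming the Deployment from testdata/mysql.yaml is named mysql and uses the root password "password" (this loop is illustrative, not the test's code):

    # poll until the server accepts a query
    until kubectl --context functional-487501 exec deploy/mysql -- mysql -ppassword -e "show databases;"; do
      sleep 1
    done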

TestFunctional/parallel/FileSync (0.3s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/14427/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh "sudo cat /etc/test/nested/copy/14427/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

TestFunctional/parallel/CertSync (1.72s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/14427.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh "sudo cat /etc/ssl/certs/14427.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/14427.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh "sudo cat /usr/share/ca-certificates/14427.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/144272.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh "sudo cat /etc/ssl/certs/144272.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/144272.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh "sudo cat /usr/share/ca-certificates/144272.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.72s)
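
The test checks each synced certificate under two names: the literal /etc/ssl/certs/<id>.pem path and a short hashed alias (51391683.0, 3ec20f2e.0), which appear to be OpenSSL subject-hash names. If that assumption holds, the alias can be derived from the PEM file like this (sketch):

    # should print the hash used as the .0 filename checked above
    openssl x509 -hash -noout -in /etc/ssl/certs/14427.pem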

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-487501 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh "sudo systemctl is-active docker"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-487501 ssh "sudo systemctl is-active docker": exit status 1 (288.174382ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2037: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh "sudo systemctl is-active containerd"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-487501 ssh "sudo systemctl is-active containerd": exit status 1 (296.209528ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)
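
The non-zero exits here are the expected outcome: with crio as the active runtime, systemctl is-active for docker and containerd prints "inactive" and exits non-zero (status 3 in the output above), which the ssh wrapper surfaces as "Process exited with status 3". A manual spot check would look like this (sketch):

    out/minikube-linux-amd64 -p functional-487501 ssh "sudo systemctl is-active docker"; echo "exit=$?"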

TestFunctional/parallel/License (0.46s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.46s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.18s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-487501 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-487501 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-nw8b8" [31806214-0ea0-42f5-8f03-936787d4e097] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-nw8b8" [31806214-0ea0-42f5-8f03-936787d4e097] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003114653s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.18s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.46s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.46s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-487501 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-487501
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-487501
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-487501 image ls --format short --alsologtostderr:
I1227 20:00:23.957148   52011 out.go:360] Setting OutFile to fd 1 ...
I1227 20:00:23.957407   52011 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:00:23.957416   52011 out.go:374] Setting ErrFile to fd 2...
I1227 20:00:23.957419   52011 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:00:23.957619   52011 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
I1227 20:00:23.958188   52011 config.go:182] Loaded profile config "functional-487501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 20:00:23.958295   52011 config.go:182] Loaded profile config "functional-487501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 20:00:23.958726   52011 cli_runner.go:164] Run: docker container inspect functional-487501 --format={{.State.Status}}
I1227 20:00:23.980054   52011 ssh_runner.go:195] Run: systemctl --version
I1227 20:00:23.980103   52011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-487501
I1227 20:00:24.001702   52011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/functional-487501/id_rsa Username:docker}
I1227 20:00:24.091261   52011 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
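
The four ImageList subtests render the same crictl image data; only the output formatter differs. The manual equivalents, taken directly from the commands logged in these tests, are:

    out/minikube-linux-amd64 -p functional-487501 image ls --format short
    out/minikube-linux-amd64 -p functional-487501 image ls --format table
    out/minikube-linux-amd64 -p functional-487501 image ls --format json
    out/minikube-linux-amd64 -p functional-487501 image ls --format yaml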

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-487501 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                       IMAGE                       │                  TAG                  │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox                       │ latest                                │ beae173ccac6a │ 1.46MB │
│ localhost/minikube-local-cache-test               │ functional-487501                     │ 59b0f6d7748d3 │ 3.33kB │
│ public.ecr.aws/docker/library/mysql               │ 8.4                                   │ 5e3dcc4ab5604 │ 804MB  │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0                               │ 5c6acd67e9cd1 │ 90.8MB │
│ registry.k8s.io/pause                             │ 3.3                                   │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                             │ latest                                │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc                          │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                                    │ 6e38f40d628db │ 31.5MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-487501                     │ 9056ab77afb8e │ 4.95MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ latest                                │ 9056ab77afb8e │ 4.95MB │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1                               │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0                               │ 2c9a4b058bd7e │ 76.9MB │
│ registry.k8s.io/pause                             │ 3.10.1                                │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/kindest/kindnetd                        │ v20250512-df8de77b                    │ 409467f978b4a │ 109MB  │
│ localhost/my-image                                │ functional-487501                     │ bae3054fb47e2 │ 1.47MB │
│ public.ecr.aws/nginx/nginx                        │ alpine                                │ 04da2b0513cd7 │ 55.2MB │
│ registry.k8s.io/etcd                              │ 3.6.6-0                               │ 0a108f7189562 │ 63.6MB │
│ docker.io/kindest/kindnetd                        │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ 4921d7a6dffa9 │ 108MB  │
│ registry.k8s.io/kube-proxy                        │ v1.35.0                               │ 32652ff1bbe6b │ 72MB   │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0                               │ 550794e3b12ac │ 52.8MB │
│ registry.k8s.io/pause                             │ 3.1                                   │ da86e6ba6ca19 │ 747kB  │
└───────────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-487501 image ls --format table --alsologtostderr:
I1227 20:00:26.606116   52756 out.go:360] Setting OutFile to fd 1 ...
I1227 20:00:26.606232   52756 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:00:26.606241   52756 out.go:374] Setting ErrFile to fd 2...
I1227 20:00:26.606245   52756 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:00:26.606405   52756 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
I1227 20:00:26.606952   52756 config.go:182] Loaded profile config "functional-487501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 20:00:26.607052   52756 config.go:182] Loaded profile config "functional-487501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 20:00:26.607448   52756 cli_runner.go:164] Run: docker container inspect functional-487501 --format={{.State.Status}}
I1227 20:00:26.625693   52756 ssh_runner.go:195] Run: systemctl --version
I1227 20:00:26.625736   52756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-487501
I1227 20:00:26.642530   52756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/functional-487501/id_rsa Username:docker}
I1227 20:00:26.730217   52756 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-487501 image ls --format json --alsologtostderr:
[{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests":["registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a","registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"63582405"},{"id":"5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499","repoDigests":["registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3","registry.k8s.io/kube-apiserver@sha256:50e01ce089b6b6508e2f68ba0da943a3bc4134596e7e2afaac27dd26f71aca7a"],"repoTags":["
registry.k8s.io/kube-apiserver:v1.35.0"],"size":"90844140"},{"id":"32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8","repoDigests":["registry.k8s.io/kube-proxy@sha256:ad87ae17f92f26144bd5a35fc86a73f2fae6effd1666db51bc03f8e9213de532","registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"71986585"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha2
56:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-487501","ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest"],"size":"4945146"},{"id":"59b0f6d7748d300e1ef67257571f82484d340d5cef4514eeeaec5443d9f3ab40","repoDigests":["localhost/minikube-local-cache-test@sha256:f317d4ba565cb08a4ea4b57e8a9f158c7bc3fd6d69c931d235532d96b3d87cb2"],"repoTags":["localhost/minikube-local-cache-test:functional-487501"],"size":"3330"},{"id":"bae3054fb47e2dabb9424c5d80bc0e88d3fcf0705a04cf62c3af077431e18f89","repoDigests":["localhost/my-image@sha256:3943c44f9263c57782e14f491fbe436d389f86a79a75c022a1611d83b82d5c04"],"repoTags":["localhost/my-image:functional-487501"],"size":"1468744"},{"id":"2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508","repoDigests":["registry.k8s.io/kube-controller-manager@sha256
:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111","registry.k8s.io/kube-controller-manager@sha256:e0ce4c7d278a001734bbd8020ed1b7e535ae9d2412c700032eb3df190ea91a62"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"76893520"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc
2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"8c5f7ef0a978f29d046ef10d36da5854a667921d7af11bb20c21f0c584809a5a","repoDigests":["docker.io/library/9fde31a061eafc851411ced92ff9832e22f8cfb7fb13a2a62c2810cf369c1a65-tmp@sha256:5e57a6044e039ec1842054675a9840443cbf58ba613d487c61c64de1f2eb9c6c"],"repoTags":[],"size":"1466132"},{"id":"04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5","repoDigests":["public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c","public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55157106"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repo
Tags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"
},{"id":"4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae","docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"107598204"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f2
4d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"5e3dcc4ab5604ab9bdf1054833d4f0ac396465de830ccac42d4f59131db9ba23","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:1f5b0aca09cfa06d9a7b89b28d349c1e01ba0d31339a4786fbcb3d5927070130","public.ecr.aws/docker/library/mysql@sha256:eaf64e87ae0d1136d46405ad56c9010de509fd5b949b9c8ede45c94f47804d21"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803760263"},{"id":"550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f","registry.k8s.io/kube-scheduler@sha256:dd2b6a420b171e83748166a66372f43384b3142fc4f6f56a6240a9e152cccd69"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"52763986"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-487501 image ls --format json --alsologtostderr:
I1227 20:00:26.391347   52700 out.go:360] Setting OutFile to fd 1 ...
I1227 20:00:26.391440   52700 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:00:26.391448   52700 out.go:374] Setting ErrFile to fd 2...
I1227 20:00:26.391452   52700 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:00:26.391666   52700 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
I1227 20:00:26.392245   52700 config.go:182] Loaded profile config "functional-487501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 20:00:26.392343   52700 config.go:182] Loaded profile config "functional-487501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 20:00:26.392765   52700 cli_runner.go:164] Run: docker container inspect functional-487501 --format={{.State.Status}}
I1227 20:00:26.410143   52700 ssh_runner.go:195] Run: systemctl --version
I1227 20:00:26.410184   52700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-487501
I1227 20:00:26.426620   52700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/functional-487501/id_rsa Username:docker}
I1227 20:00:26.516638   52700 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-487501 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 5e3dcc4ab5604ab9bdf1054833d4f0ac396465de830ccac42d4f59131db9ba23
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:1f5b0aca09cfa06d9a7b89b28d349c1e01ba0d31339a4786fbcb3d5927070130
- public.ecr.aws/docker/library/mysql@sha256:eaf64e87ae0d1136d46405ad56c9010de509fd5b949b9c8ede45c94f47804d21
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803760263"
- id: 32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8
repoDigests:
- registry.k8s.io/kube-proxy@sha256:ad87ae17f92f26144bd5a35fc86a73f2fae6effd1666db51bc03f8e9213de532
- registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "71986585"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 59b0f6d7748d300e1ef67257571f82484d340d5cef4514eeeaec5443d9f3ab40
repoDigests:
- localhost/minikube-local-cache-test@sha256:f317d4ba565cb08a4ea4b57e8a9f158c7bc3fd6d69c931d235532d96b3d87cb2
repoTags:
- localhost/minikube-local-cache-test:functional-487501
size: "3330"
- id: 5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3
- registry.k8s.io/kube-apiserver@sha256:50e01ce089b6b6508e2f68ba0da943a3bc4134596e7e2afaac27dd26f71aca7a
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "90844140"
- id: 2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111
- registry.k8s.io/kube-controller-manager@sha256:e0ce4c7d278a001734bbd8020ed1b7e535ae9d2412c700032eb3df190ea91a62
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "76893520"
- id: 4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
- docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "107598204"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-487501
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
size: "4945146"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: 0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests:
- registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "63582405"
- id: 550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f
- registry.k8s.io/kube-scheduler@sha256:dd2b6a420b171e83748166a66372f43384b3142fc4f6f56a6240a9e152cccd69
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "52763986"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c
- public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55157106"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-487501 image ls --format yaml --alsologtostderr:
I1227 20:00:24.184308   52090 out.go:360] Setting OutFile to fd 1 ...
I1227 20:00:24.184422   52090 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:00:24.184432   52090 out.go:374] Setting ErrFile to fd 2...
I1227 20:00:24.184436   52090 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:00:24.184619   52090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
I1227 20:00:24.185169   52090 config.go:182] Loaded profile config "functional-487501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 20:00:24.185259   52090 config.go:182] Loaded profile config "functional-487501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 20:00:24.185654   52090 cli_runner.go:164] Run: docker container inspect functional-487501 --format={{.State.Status}}
I1227 20:00:24.202926   52090 ssh_runner.go:195] Run: systemctl --version
I1227 20:00:24.202975   52090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-487501
I1227 20:00:24.220131   52090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/functional-487501/id_rsa Username:docker}
I1227 20:00:24.310014   52090 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-487501 ssh pgrep buildkitd: exit status 1 (262.77743ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 image build -t localhost/my-image:functional-487501 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-487501 image build -t localhost/my-image:functional-487501 testdata/build --alsologtostderr: (1.51391937s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-487501 image build -t localhost/my-image:functional-487501 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8c5f7ef0a97
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-487501
--> bae3054fb47
Successfully tagged localhost/my-image:functional-487501
bae3054fb47e2dabb9424c5d80bc0e88d3fcf0705a04cf62c3af077431e18f89
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-487501 image build -t localhost/my-image:functional-487501 testdata/build --alsologtostderr:
I1227 20:00:24.659447   52256 out.go:360] Setting OutFile to fd 1 ...
I1227 20:00:24.659704   52256 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:00:24.659713   52256 out.go:374] Setting ErrFile to fd 2...
I1227 20:00:24.659718   52256 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:00:24.659894   52256 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
I1227 20:00:24.660432   52256 config.go:182] Loaded profile config "functional-487501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 20:00:24.661046   52256 config.go:182] Loaded profile config "functional-487501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 20:00:24.661491   52256 cli_runner.go:164] Run: docker container inspect functional-487501 --format={{.State.Status}}
I1227 20:00:24.679801   52256 ssh_runner.go:195] Run: systemctl --version
I1227 20:00:24.679858   52256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-487501
I1227 20:00:24.698570   52256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/functional-487501/id_rsa Username:docker}
I1227 20:00:24.786127   52256 build_images.go:162] Building image from path: /tmp/build.2292900808.tar
I1227 20:00:24.786190   52256 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1227 20:00:24.793804   52256 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2292900808.tar
I1227 20:00:24.797244   52256 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2292900808.tar: stat -c "%s %y" /var/lib/minikube/build/build.2292900808.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2292900808.tar': No such file or directory
I1227 20:00:24.797280   52256 ssh_runner.go:362] scp /tmp/build.2292900808.tar --> /var/lib/minikube/build/build.2292900808.tar (3072 bytes)
I1227 20:00:24.813942   52256 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2292900808
I1227 20:00:24.821068   52256 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2292900808 -xf /var/lib/minikube/build/build.2292900808.tar
I1227 20:00:24.828427   52256 crio.go:315] Building image: /var/lib/minikube/build/build.2292900808
I1227 20:00:24.828483   52256 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-487501 /var/lib/minikube/build/build.2292900808 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1227 20:00:26.098199   52256 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-487501 /var/lib/minikube/build/build.2292900808 --cgroup-manager=cgroupfs: (1.269686237s)
I1227 20:00:26.098269   52256 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2292900808
I1227 20:00:26.106482   52256 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2292900808.tar
I1227 20:00:26.113732   52256 build_images.go:218] Built localhost/my-image:functional-487501 from /tmp/build.2292900808.tar
I1227 20:00:26.113759   52256 build_images.go:134] succeeded building to: functional-487501
I1227 20:00:26.113763   52256 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.00s)
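
With crio as the runtime, minikube image build copies the build context onto the node (/var/lib/minikube/build/...) and runs sudo podman build there, as the stderr above shows. Judging from the STEP lines, the testdata/build context corresponds to a Dockerfile roughly like this (a reconstruction, not the actual file):

    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /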

TestFunctional/parallel/ImageCommands/Setup (2.68s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0: (2.654072649s)
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-487501
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.68s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1335: Took "321.296701ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1349: Took "63.644536ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1386: Took "343.49228ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1399: Took "57.701286ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-487501 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-487501 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-487501 --alsologtostderr: (1.080684238s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-487501 image ls: (1.263721784s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.34s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/parallel/MountCmd/any-port (7.79s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-487501 /tmp/TestFunctionalparallelMountCmdany-port3363443896/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766865605894251207" to /tmp/TestFunctionalparallelMountCmdany-port3363443896/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766865605894251207" to /tmp/TestFunctionalparallelMountCmdany-port3363443896/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766865605894251207" to /tmp/TestFunctionalparallelMountCmdany-port3363443896/001/test-1766865605894251207
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-487501 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (333.110511ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1227 20:00:06.227706   14427 retry.go:84] will retry after 300ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 27 20:00 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 27 20:00 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 27 20:00 test-1766865605894251207
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh cat /mount-9p/test-1766865605894251207
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-487501 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [c828dd81-0f43-4b76-97b4-bb0c6ce6d621] Pending
helpers_test.go:353: "busybox-mount" [c828dd81-0f43-4b76-97b4-bb0c6ce6d621] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [c828dd81-0f43-4b76-97b4-bb0c6ce6d621] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [c828dd81-0f43-4b76-97b4-bb0c6ce6d621] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.002681983s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-487501 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-487501 /tmp/TestFunctionalparallelMountCmdany-port3363443896/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.79s)
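The any-port mount flow above can be repeated outside the harness; a minimal sketch (the profile name comes from this run, /tmp/mnt is an illustrative scratch directory):

  out/minikube-linux-amd64 mount -p functional-487501 /tmp/mnt:/mount-9p --alsologtostderr -v=1 &   # serve the host directory into the guest over 9p
  out/minikube-linux-amd64 -p functional-487501 ssh "findmnt -T /mount-9p | grep 9p"                # confirm the 9p mount is visible in the guest
  out/minikube-linux-amd64 -p functional-487501 ssh -- ls -la /mount-9p                             # files written on the host appear here
  out/minikube-linux-amd64 -p functional-487501 ssh "sudo umount -f /mount-9p"                      # tear the mount down when finished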

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.81s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-487501 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.81s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.51s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
2025/12/27 20:00:08 [DEBUG] GET http://127.0.0.1:40877/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-487501
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-487501 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-487501 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-487501 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.57s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.57s)
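Taken together, ImageSaveToFile, ImageRemove and ImageLoadFromFile above form a save/remove/load round trip; a minimal sketch (the tarball path is illustrative):

  out/minikube-linux-amd64 -p functional-487501 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-487501 /tmp/echo-server.tar   # export the image from the cluster runtime
  out/minikube-linux-amd64 -p functional-487501 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-487501                          # drop it from the runtime
  out/minikube-linux-amd64 -p functional-487501 image load /tmp/echo-server.tar                                                                       # re-import it from the tarball
  out/minikube-linux-amd64 -p functional-487501 image ls                                                                                              # verify it is listed again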

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.5s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-487501
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-487501 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-487501
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 service list -o json
functional_test.go:1509: Took "519.113705ms" to run "out/minikube-linux-amd64 -p functional-487501 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:30202
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-487501 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-487501 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-487501 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 48615: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-487501 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.36s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-487501 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.21s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-487501 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [ae2931ab-6cf3-4262-b2ae-87540fa7084a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [ae2931ab-6cf3-4262-b2ae-87540fa7084a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003559858s
I1227 20:00:22.764166   14427 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.21s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.39s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:30202
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)
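The ServiceCmd cases above all resolve the hello-node NodePort to a URL in slightly different formats; a minimal sketch (the curl probe is illustrative and not part of the test output):

  out/minikube-linux-amd64 -p functional-487501 service list -o json                       # enumerate services with their URLs
  URL=$(out/minikube-linux-amd64 -p functional-487501 service hello-node --url)            # e.g. http://192.168.49.2:30202, as found above
  curl "$URL"                                                                              # hit the NodePort directly from the host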

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.84s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-487501 /tmp/TestFunctionalparallelMountCmdspecific-port3642261665/001:/mount-9p --alsologtostderr -v=1 --port 34829]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-487501 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (274.12466ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1227 20:00:13.957746   14427 retry.go:84] will retry after 500ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-487501 /tmp/TestFunctionalparallelMountCmdspecific-port3642261665/001:/mount-9p --alsologtostderr -v=1 --port 34829] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-487501 ssh "sudo umount -f /mount-9p": exit status 1 (255.572597ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-amd64 -p functional-487501 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-487501 /tmp/TestFunctionalparallelMountCmdspecific-port3642261665/001:/mount-9p --alsologtostderr -v=1 --port 34829] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.84s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.96s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-487501 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2526166990/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-487501 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2526166990/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-487501 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2526166990/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-487501 ssh "findmnt -T" /mount1: exit status 1 (325.888374ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-487501 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-487501 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-487501 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2526166990/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-487501 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2526166990/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-487501 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2526166990/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.96s)
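VerifyCleanup above shows that stray mount helpers can all be killed in one step; a minimal sketch (mount points are illustrative):

  out/minikube-linux-amd64 mount -p functional-487501 /tmp/mnt:/mount1 --alsologtostderr -v=1 &   # start one or more background mount processes
  out/minikube-linux-amd64 -p functional-487501 ssh "findmnt -T" /mount1                          # confirm the guest sees the mount
  out/minikube-linux-amd64 mount -p functional-487501 --kill=true                                 # kill every mount process for this profile at once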

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-487501 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.63.228 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
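The tunnel sequence above (StartTunnel, WaitService, IngressIP, AccessDirect) can be reproduced by hand; a minimal sketch (testdata/testsvc.yaml is the manifest the test applies; the curl probe is illustrative):

  out/minikube-linux-amd64 -p functional-487501 tunnel --alsologtostderr &                                   # route LoadBalancer traffic to the host
  kubectl --context functional-487501 apply -f testdata/testsvc.yaml                                         # nginx-svc of type LoadBalancer
  kubectl --context functional-487501 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}     # the tunnel assigns the ingress IP (10.103.63.228 in this run)
  curl http://10.103.63.228                                                                                  # the address reported as working above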

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-487501 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-487501
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-487501
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-487501
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (104.01s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1227 20:01:41.517026   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:01:41.522321   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:01:41.532571   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:01:41.552844   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:01:41.593125   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:01:41.673447   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:01:41.833887   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:01:42.154469   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:01:42.795591   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:01:44.076067   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:01:46.637663   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:01:51.757997   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:02:01.998321   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:02:22.478680   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-176916 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m43.288704405s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (104.01s)
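The same multi-control-plane topology can be started directly; a minimal sketch (profile name from this run; with --ha minikube brings up multiple control-plane nodes behind the shared endpoint 192.168.49.254:8443 seen later in the status output):

  out/minikube-linux-amd64 -p ha-176916 start --ha --memory 3072 --wait true --driver=docker --container-runtime=crio   # HA cluster on the docker driver with crio
  out/minikube-linux-amd64 -p ha-176916 status --alsologtostderr -v 5                                                   # each node should report Running/Configured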

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.02s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-176916 kubectl -- rollout status deployment/busybox: (3.139021276s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 kubectl -- exec busybox-769dd8b7dd-rtcvt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 kubectl -- exec busybox-769dd8b7dd-wknbt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 kubectl -- exec busybox-769dd8b7dd-xjrg8 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 kubectl -- exec busybox-769dd8b7dd-rtcvt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 kubectl -- exec busybox-769dd8b7dd-wknbt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 kubectl -- exec busybox-769dd8b7dd-xjrg8 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 kubectl -- exec busybox-769dd8b7dd-rtcvt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 kubectl -- exec busybox-769dd8b7dd-wknbt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 kubectl -- exec busybox-769dd8b7dd-xjrg8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.02s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.98s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 kubectl -- exec busybox-769dd8b7dd-rtcvt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 kubectl -- exec busybox-769dd8b7dd-rtcvt -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 kubectl -- exec busybox-769dd8b7dd-wknbt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 kubectl -- exec busybox-769dd8b7dd-wknbt -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 kubectl -- exec busybox-769dd8b7dd-xjrg8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 kubectl -- exec busybox-769dd8b7dd-xjrg8 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.98s)
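The in-pod checks above boil down to two execs per busybox pod; a minimal sketch (pod name taken from this run):

  out/minikube-linux-amd64 -p ha-176916 kubectl -- exec busybox-769dd8b7dd-rtcvt -- nslookup kubernetes.default.svc.cluster.local   # cluster DNS resolves service names
  out/minikube-linux-amd64 -p ha-176916 kubectl -- exec busybox-769dd8b7dd-rtcvt -- sh -c "ping -c 1 192.168.49.1"                  # 192.168.49.1 is what host.minikube.internal resolves to in this run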

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.9s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-176916 node add --alsologtostderr -v 5: (24.056846326s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.90s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-176916 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (16.18s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 cp testdata/cp-test.txt ha-176916:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 cp ha-176916:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile506876416/001/cp-test_ha-176916.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 cp ha-176916:/home/docker/cp-test.txt ha-176916-m02:/home/docker/cp-test_ha-176916_ha-176916-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916-m02 "sudo cat /home/docker/cp-test_ha-176916_ha-176916-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 cp ha-176916:/home/docker/cp-test.txt ha-176916-m03:/home/docker/cp-test_ha-176916_ha-176916-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916-m03 "sudo cat /home/docker/cp-test_ha-176916_ha-176916-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 cp ha-176916:/home/docker/cp-test.txt ha-176916-m04:/home/docker/cp-test_ha-176916_ha-176916-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916-m04 "sudo cat /home/docker/cp-test_ha-176916_ha-176916-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 cp testdata/cp-test.txt ha-176916-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916-m02 "sudo cat /home/docker/cp-test.txt"
E1227 20:03:03.439156   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 cp ha-176916-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile506876416/001/cp-test_ha-176916-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 cp ha-176916-m02:/home/docker/cp-test.txt ha-176916:/home/docker/cp-test_ha-176916-m02_ha-176916.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916 "sudo cat /home/docker/cp-test_ha-176916-m02_ha-176916.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 cp ha-176916-m02:/home/docker/cp-test.txt ha-176916-m03:/home/docker/cp-test_ha-176916-m02_ha-176916-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916-m03 "sudo cat /home/docker/cp-test_ha-176916-m02_ha-176916-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 cp ha-176916-m02:/home/docker/cp-test.txt ha-176916-m04:/home/docker/cp-test_ha-176916-m02_ha-176916-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916-m04 "sudo cat /home/docker/cp-test_ha-176916-m02_ha-176916-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 cp testdata/cp-test.txt ha-176916-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 cp ha-176916-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile506876416/001/cp-test_ha-176916-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 cp ha-176916-m03:/home/docker/cp-test.txt ha-176916:/home/docker/cp-test_ha-176916-m03_ha-176916.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916 "sudo cat /home/docker/cp-test_ha-176916-m03_ha-176916.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 cp ha-176916-m03:/home/docker/cp-test.txt ha-176916-m02:/home/docker/cp-test_ha-176916-m03_ha-176916-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916-m02 "sudo cat /home/docker/cp-test_ha-176916-m03_ha-176916-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 cp ha-176916-m03:/home/docker/cp-test.txt ha-176916-m04:/home/docker/cp-test_ha-176916-m03_ha-176916-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916-m04 "sudo cat /home/docker/cp-test_ha-176916-m03_ha-176916-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 cp testdata/cp-test.txt ha-176916-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 cp ha-176916-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile506876416/001/cp-test_ha-176916-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 cp ha-176916-m04:/home/docker/cp-test.txt ha-176916:/home/docker/cp-test_ha-176916-m04_ha-176916.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916 "sudo cat /home/docker/cp-test_ha-176916-m04_ha-176916.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 cp ha-176916-m04:/home/docker/cp-test.txt ha-176916-m02:/home/docker/cp-test_ha-176916-m04_ha-176916-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916-m02 "sudo cat /home/docker/cp-test_ha-176916-m04_ha-176916-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 cp ha-176916-m04:/home/docker/cp-test.txt ha-176916-m03:/home/docker/cp-test_ha-176916-m04_ha-176916-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916-m03 "sudo cat /home/docker/cp-test_ha-176916-m04_ha-176916-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.18s)
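Each hop in the copy matrix above pairs a cp with an ssh cat; a minimal sketch for one direction (the destination path on the host is illustrative):

  out/minikube-linux-amd64 -p ha-176916 cp testdata/cp-test.txt ha-176916-m02:/home/docker/cp-test.txt     # host -> node m02
  out/minikube-linux-amd64 -p ha-176916 ssh -n ha-176916-m02 "sudo cat /home/docker/cp-test.txt"           # verify the content landed
  out/minikube-linux-amd64 -p ha-176916 cp ha-176916-m02:/home/docker/cp-test.txt /tmp/cp-test-back.txt    # node -> host round trip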

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (14.21s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-176916 node stop m02 --alsologtostderr -v 5: (13.544316684s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-176916 status --alsologtostderr -v 5: exit status 7 (669.671813ms)

-- stdout --
	ha-176916
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-176916-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-176916-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-176916-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1227 20:03:28.190811   72999 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:03:28.190908   72999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:03:28.190934   72999 out.go:374] Setting ErrFile to fd 2...
	I1227 20:03:28.190952   72999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:03:28.191138   72999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:03:28.191351   72999 out.go:368] Setting JSON to false
	I1227 20:03:28.191382   72999 mustload.go:66] Loading cluster: ha-176916
	I1227 20:03:28.191424   72999 notify.go:221] Checking for updates...
	I1227 20:03:28.191806   72999 config.go:182] Loaded profile config "ha-176916": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:03:28.191825   72999 status.go:174] checking status of ha-176916 ...
	I1227 20:03:28.192242   72999 cli_runner.go:164] Run: docker container inspect ha-176916 --format={{.State.Status}}
	I1227 20:03:28.210838   72999 status.go:371] ha-176916 host status = "Running" (err=<nil>)
	I1227 20:03:28.210860   72999 host.go:66] Checking if "ha-176916" exists ...
	I1227 20:03:28.211186   72999 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-176916
	I1227 20:03:28.230648   72999 host.go:66] Checking if "ha-176916" exists ...
	I1227 20:03:28.230888   72999 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:03:28.230962   72999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-176916
	I1227 20:03:28.247320   72999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/ha-176916/id_rsa Username:docker}
	I1227 20:03:28.334824   72999 ssh_runner.go:195] Run: systemctl --version
	I1227 20:03:28.340935   72999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:03:28.352208   72999 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:03:28.407212   72999 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-27 20:03:28.397569081 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:03:28.407971   72999 kubeconfig.go:125] found "ha-176916" server: "https://192.168.49.254:8443"
	I1227 20:03:28.408011   72999 api_server.go:166] Checking apiserver status ...
	I1227 20:03:28.408057   72999 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:03:28.419228   72999 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1234/cgroup
	I1227 20:03:28.427698   72999 ssh_runner.go:195] Run: sudo grep ^0:: /proc/1234/cgroup
	I1227 20:03:28.434790   72999 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/system.slice/crio-aab39e2431767009be5fd6d458647f5e07064755550b829e91f7603e229fa03d.scope/container/cgroup.freeze
	I1227 20:03:28.441571   72999 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1227 20:03:28.446908   72999 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1227 20:03:28.446941   72999 status.go:463] ha-176916 apiserver status = Running (err=<nil>)
	I1227 20:03:28.446952   72999 status.go:176] ha-176916 status: &{Name:ha-176916 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:03:28.446967   72999 status.go:174] checking status of ha-176916-m02 ...
	I1227 20:03:28.447201   72999 cli_runner.go:164] Run: docker container inspect ha-176916-m02 --format={{.State.Status}}
	I1227 20:03:28.464313   72999 status.go:371] ha-176916-m02 host status = "Stopped" (err=<nil>)
	I1227 20:03:28.464330   72999 status.go:384] host is not running, skipping remaining checks
	I1227 20:03:28.464335   72999 status.go:176] ha-176916-m02 status: &{Name:ha-176916-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:03:28.464351   72999 status.go:174] checking status of ha-176916-m03 ...
	I1227 20:03:28.464657   72999 cli_runner.go:164] Run: docker container inspect ha-176916-m03 --format={{.State.Status}}
	I1227 20:03:28.481334   72999 status.go:371] ha-176916-m03 host status = "Running" (err=<nil>)
	I1227 20:03:28.481351   72999 host.go:66] Checking if "ha-176916-m03" exists ...
	I1227 20:03:28.481578   72999 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-176916-m03
	I1227 20:03:28.498292   72999 host.go:66] Checking if "ha-176916-m03" exists ...
	I1227 20:03:28.498565   72999 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:03:28.498603   72999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-176916-m03
	I1227 20:03:28.515288   72999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/ha-176916-m03/id_rsa Username:docker}
	I1227 20:03:28.603791   72999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:03:28.615781   72999 kubeconfig.go:125] found "ha-176916" server: "https://192.168.49.254:8443"
	I1227 20:03:28.615804   72999 api_server.go:166] Checking apiserver status ...
	I1227 20:03:28.615832   72999 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:03:28.626052   72999 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1162/cgroup
	I1227 20:03:28.633628   72999 ssh_runner.go:195] Run: sudo grep ^0:: /proc/1162/cgroup
	I1227 20:03:28.640991   72999 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/system.slice/crio-75888fd0ced62f9e69317c9e9372f1cb526a9981eefb23412d004d4230e03537.scope/container/cgroup.freeze
	I1227 20:03:28.647871   72999 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1227 20:03:28.651782   72999 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1227 20:03:28.651807   72999 status.go:463] ha-176916-m03 apiserver status = Running (err=<nil>)
	I1227 20:03:28.651817   72999 status.go:176] ha-176916-m03 status: &{Name:ha-176916-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:03:28.651836   72999 status.go:174] checking status of ha-176916-m04 ...
	I1227 20:03:28.652075   72999 cli_runner.go:164] Run: docker container inspect ha-176916-m04 --format={{.State.Status}}
	I1227 20:03:28.669634   72999 status.go:371] ha-176916-m04 host status = "Running" (err=<nil>)
	I1227 20:03:28.669652   72999 host.go:66] Checking if "ha-176916-m04" exists ...
	I1227 20:03:28.669931   72999 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-176916-m04
	I1227 20:03:28.685751   72999 host.go:66] Checking if "ha-176916-m04" exists ...
	I1227 20:03:28.685999   72999 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:03:28.686032   72999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-176916-m04
	I1227 20:03:28.702616   72999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/ha-176916-m04/id_rsa Username:docker}
	I1227 20:03:28.790107   72999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:03:28.801849   72999 status.go:176] ha-176916-m04 status: &{Name:ha-176916-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (14.21s)
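Stopping a single control-plane node leaves the cluster reachable but flips minikube status to a non-zero exit; a minimal sketch:

  out/minikube-linux-amd64 -p ha-176916 node stop m02 --alsologtostderr -v 5    # stop the second control plane
  out/minikube-linux-amd64 -p ha-176916 status --alsologtostderr -v 5           # exits 7 in this run because one host is Stopped
  out/minikube-linux-amd64 -p ha-176916 node start m02 --alsologtostderr -v 5   # bring it back, as RestartSecondaryNode does below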

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (8.65s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-176916 node start m02 --alsologtostderr -v 5: (7.696703778s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.65s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (101.56s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-176916 stop --alsologtostderr -v 5: (44.65815055s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 start --wait true --alsologtostderr -v 5
E1227 20:04:25.360416   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:05:00.935874   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/functional-487501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:05:00.941187   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/functional-487501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:05:00.951425   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/functional-487501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:05:00.971657   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/functional-487501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:05:01.011868   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/functional-487501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:05:01.092232   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/functional-487501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:05:01.252695   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/functional-487501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:05:01.573395   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/functional-487501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:05:02.214319   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/functional-487501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:05:03.495086   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/functional-487501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:05:06.055982   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/functional-487501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:05:11.176563   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/functional-487501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-176916 start --wait true --alsologtostderr -v 5: (56.780613924s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (101.56s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 node delete m03 --alsologtostderr -v 5
E1227 20:05:21.417275   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/functional-487501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-176916 node delete m03 --alsologtostderr -v 5: (9.678174387s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.47s)
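
Note: the go-template passed to "kubectl get nodes" above (and again in RestartCluster below) prints one value per node, the status of that node's Ready condition, so the test can verify the remaining nodes report Ready. A minimal Go sketch of how that template evaluates, run against hypothetical node data rather than a live cluster:

package main

import (
	"os"
	"text/template"
)

func main() {
	// Template taken from the ha_test.go step above: for every node, print the
	// status of its "Ready" condition followed by a newline.
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Hypothetical stand-in for the JSON document that `kubectl get nodes -o json`
	// returns: a single node whose Ready condition is "True".
	data := map[string]interface{}{
		"items": []interface{}{
			map[string]interface{}{
				"status": map[string]interface{}{
					"conditions": []interface{}{
						map[string]interface{}{"type": "Ready", "status": "True"},
					},
				},
			},
		},
	}

	t := template.Must(template.New("ready").Parse(tmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
	// Prints " True" on its own line; a real cluster yields one such line per node.
}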

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (43.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 stop --alsologtostderr -v 5
E1227 20:05:41.898334   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/functional-487501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-176916 stop --alsologtostderr -v 5: (43.309286305s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-176916 status --alsologtostderr -v 5: exit status 7 (111.032404ms)

                                                
                                                
-- stdout --
	ha-176916
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-176916-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-176916-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 20:06:15.087191   87241 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:06:15.087428   87241 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:06:15.087436   87241 out.go:374] Setting ErrFile to fd 2...
	I1227 20:06:15.087440   87241 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:06:15.087635   87241 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:06:15.087788   87241 out.go:368] Setting JSON to false
	I1227 20:06:15.087814   87241 mustload.go:66] Loading cluster: ha-176916
	I1227 20:06:15.087892   87241 notify.go:221] Checking for updates...
	I1227 20:06:15.088305   87241 config.go:182] Loaded profile config "ha-176916": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:06:15.088335   87241 status.go:174] checking status of ha-176916 ...
	I1227 20:06:15.088882   87241 cli_runner.go:164] Run: docker container inspect ha-176916 --format={{.State.Status}}
	I1227 20:06:15.106636   87241 status.go:371] ha-176916 host status = "Stopped" (err=<nil>)
	I1227 20:06:15.106657   87241 status.go:384] host is not running, skipping remaining checks
	I1227 20:06:15.106665   87241 status.go:176] ha-176916 status: &{Name:ha-176916 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:06:15.106700   87241 status.go:174] checking status of ha-176916-m02 ...
	I1227 20:06:15.107023   87241 cli_runner.go:164] Run: docker container inspect ha-176916-m02 --format={{.State.Status}}
	I1227 20:06:15.126678   87241 status.go:371] ha-176916-m02 host status = "Stopped" (err=<nil>)
	I1227 20:06:15.126698   87241 status.go:384] host is not running, skipping remaining checks
	I1227 20:06:15.126703   87241 status.go:176] ha-176916-m02 status: &{Name:ha-176916-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:06:15.126719   87241 status.go:174] checking status of ha-176916-m04 ...
	I1227 20:06:15.126993   87241 cli_runner.go:164] Run: docker container inspect ha-176916-m04 --format={{.State.Status}}
	I1227 20:06:15.143020   87241 status.go:371] ha-176916-m04 host status = "Stopped" (err=<nil>)
	I1227 20:06:15.143062   87241 status.go:384] host is not running, skipping remaining checks
	I1227 20:06:15.143076   87241 status.go:176] ha-176916-m04 status: &{Name:ha-176916-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (43.42s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (56.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1227 20:06:22.859127   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/functional-487501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:06:41.517780   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:07:09.201318   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-176916 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (55.246080695s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (56.02s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (28.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-176916 node add --control-plane --alsologtostderr -v 5: (27.946177351s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-176916 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (28.85s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

                                                
                                    
TestJSONOutput/start/Command (38.83s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-638726 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-638726 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (38.83123494s)
--- PASS: TestJSONOutput/start/Command (38.83s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.04s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-638726 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-638726 --output=json --user=testUser: (6.039717781s)
--- PASS: TestJSONOutput/stop/Command (6.04s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-336749 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-336749 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (69.739507ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a48879a9-5841-4d3a-aaa2-bcc9c85ef5d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-336749] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1816db53-58f3-4f82-b1d6-bfcbdd6d4568","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22332"}}
	{"specversion":"1.0","id":"8d6f6cbd-ec8f-4b31-b90f-f1e7505218eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d1b0b17a-c44a-488e-9245-2f3a77156eca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig"}}
	{"specversion":"1.0","id":"3e20d84b-274b-4773-815a-008e9d6feac9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube"}}
	{"specversion":"1.0","id":"526a5815-0417-47b5-9774-21130a359d17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"6524c8e9-2b40-4687-88a1-48819135bb86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6b38e74d-6ac1-447b-9b0a-3cf93c997d05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-336749" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-336749
--- PASS: TestErrorJSONOutput (0.21s)
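
Note: the --output=json records above are CloudEvents-style JSON objects, one per line, with the event payload under "data". A small sketch of decoding the error event from this run (field names copied from the output above; this is not minikube's own code):

package main

import (
	"encoding/json"
	"fmt"
)

// cloudEvent mirrors the fields visible in the --output=json lines above.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// The io.k8s.sigs.minikube.error event emitted for the unsupported "fail" driver.
	line := `{"specversion":"1.0","id":"6b38e74d-6ac1-447b-9b0a-3cf93c997d05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	// The exit code travels as a string inside data; the process itself exited 56.
	fmt.Println(ev.Type, ev.Data["name"], ev.Data["exitcode"])
}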

                                                
                                    
TestKicCustomNetwork/create_custom_network (26.21s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-816313 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-816313 --network=: (24.118872293s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-816313" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-816313
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-816313: (2.073609617s)
--- PASS: TestKicCustomNetwork/create_custom_network (26.21s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (18.97s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-731509 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-731509 --network=bridge: (17.002332334s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-731509" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-731509
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-731509: (1.945671747s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (18.97s)

                                                
                                    
TestKicExistingNetwork (19.96s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1227 20:09:28.349952   14427 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1227 20:09:28.366515   14427 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1227 20:09:28.366590   14427 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1227 20:09:28.366613   14427 cli_runner.go:164] Run: docker network inspect existing-network
W1227 20:09:28.381717   14427 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1227 20:09:28.381748   14427 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1227 20:09:28.381760   14427 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1227 20:09:28.381878   14427 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1227 20:09:28.397268   14427 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0db0ba8938bc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:24:76:f1:9a:26} reservation:<nil>}
I1227 20:09:28.397645   14427 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020082d0}
I1227 20:09:28.397670   14427 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1227 20:09:28.397708   14427 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1227 20:09:28.441739   14427 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-760293 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-760293 --network=existing-network: (17.931426848s)
helpers_test.go:176: Cleaning up "existing-network-760293" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-760293
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-760293: (1.909686575s)
I1227 20:09:48.298973   14427 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (19.96s)
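
Note: the log above shows minikube probing private /24s for the pre-created network: 192.168.49.0/24 is skipped because its gateway is already bound to a local bridge, so 192.168.58.0/24 is used instead. A rough, self-contained sketch of that probing idea (not minikube's actual network_create.go logic):

package main

import (
	"fmt"
	"net"
)

// gatewayInUse reports whether the candidate gateway address is already
// assigned to a local interface (e.g. 192.168.49.1 on br-0db0ba8938bc above).
func gatewayInUse(gateway string) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.String() == gateway {
			return true
		}
	}
	return false
}

func main() {
	// Candidate gateways in the order the log shows them being considered.
	for _, gw := range []string{"192.168.49.1", "192.168.58.1", "192.168.67.1"} {
		if !gatewayInUse(gw) {
			fmt.Printf("using subnet with gateway %s (as in 'docker network create --subnet=... --gateway=%s')\n", gw, gw)
			return
		}
	}
	fmt.Println("no free candidate subnet found")
}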

                                                
                                    
TestKicCustomSubnet (20.71s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-633890 --subnet=192.168.60.0/24
E1227 20:10:00.944124   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/functional-487501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-633890 --subnet=192.168.60.0/24: (18.631861504s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-633890 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-633890" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-633890
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-633890: (2.055930473s)
--- PASS: TestKicCustomSubnet (20.71s)

                                                
                                    
TestKicStaticIP (20.55s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-556440 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-556440 --static-ip=192.168.200.200: (18.344025338s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-556440 ip
helpers_test.go:176: Cleaning up "static-ip-556440" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-556440
E1227 20:10:28.620294   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/functional-487501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-556440: (2.066076356s)
--- PASS: TestKicStaticIP (20.55s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (40.17s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-407263 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-407263 --driver=docker  --container-runtime=crio: (17.40477168s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-409585 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-409585 --driver=docker  --container-runtime=crio: (16.957614447s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-407263
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-409585
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-409585" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-409585
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p second-409585: (2.273658483s)
helpers_test.go:176: Cleaning up "first-407263" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-407263
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p first-407263: (2.297571014s)
--- PASS: TestMinikubeProfile (40.17s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (4.52s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-204533 --memory=3072 --mount-string /tmp/TestMountStartserial1664307323/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-204533 --memory=3072 --mount-string /tmp/TestMountStartserial1664307323/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.523331542s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.52s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-204533 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (4.56s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-218669 --memory=3072 --mount-string /tmp/TestMountStartserial1664307323/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-218669 --memory=3072 --mount-string /tmp/TestMountStartserial1664307323/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.563427288s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.56s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-218669 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-204533 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-204533 --alsologtostderr -v=5: (1.619129585s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-218669 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
TestMountStart/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-218669
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-218669: (1.240857501s)
--- PASS: TestMountStart/serial/Stop (1.24s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.13s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-218669
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-218669: (6.128563218s)
--- PASS: TestMountStart/serial/RestartStopped (7.13s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-218669 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (62.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-825878 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1227 20:11:41.517238   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-825878 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m1.912188906s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (62.38s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (2.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-825878 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-825878 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-825878 -- rollout status deployment/busybox: (1.319118623s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-825878 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-825878 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-825878 -- exec busybox-769dd8b7dd-8ks4b -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-825878 -- exec busybox-769dd8b7dd-kx7gm -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-825878 -- exec busybox-769dd8b7dd-8ks4b -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-825878 -- exec busybox-769dd8b7dd-kx7gm -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-825878 -- exec busybox-769dd8b7dd-8ks4b -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-825878 -- exec busybox-769dd8b7dd-kx7gm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (2.69s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-825878 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-825878 -- exec busybox-769dd8b7dd-8ks4b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-825878 -- exec busybox-769dd8b7dd-8ks4b -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-825878 -- exec busybox-769dd8b7dd-kx7gm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-825878 -- exec busybox-769dd8b7dd-kx7gm -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.68s)
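
Note: the shell pipeline above (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3) pulls the resolved host address out of busybox nslookup output so the pod can ping the container host. A loose Go equivalent of that extraction, run on a hypothetical transcript (the exact line and field positions are an assumption about busybox's output format):

package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup mimics `awk 'NR==5' | cut -d' ' -f3`: take line 5 of the
// nslookup output and return its third field. strings.Fields is a looser
// splitter than cut -d' ', which is fine for single-space-separated output.
func hostIPFromNslookup(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Fields(lines[4])
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Hypothetical busybox-style transcript; 192.168.67.1 matches the address
	// pinged in the test above.
	out := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.67.1\n"
	fmt.Println(hostIPFromNslookup(out)) // 192.168.67.1
}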

                                                
                                    
TestMultiNode/serial/AddNode (23.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-825878 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-825878 -v=5 --alsologtostderr: (22.711858876s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.32s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-825878 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.62s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 cp testdata/cp-test.txt multinode-825878:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 ssh -n multinode-825878 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 cp multinode-825878:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1745559734/001/cp-test_multinode-825878.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 ssh -n multinode-825878 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 cp multinode-825878:/home/docker/cp-test.txt multinode-825878-m02:/home/docker/cp-test_multinode-825878_multinode-825878-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 ssh -n multinode-825878 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 ssh -n multinode-825878-m02 "sudo cat /home/docker/cp-test_multinode-825878_multinode-825878-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 cp multinode-825878:/home/docker/cp-test.txt multinode-825878-m03:/home/docker/cp-test_multinode-825878_multinode-825878-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 ssh -n multinode-825878 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 ssh -n multinode-825878-m03 "sudo cat /home/docker/cp-test_multinode-825878_multinode-825878-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 cp testdata/cp-test.txt multinode-825878-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 ssh -n multinode-825878-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 cp multinode-825878-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1745559734/001/cp-test_multinode-825878-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 ssh -n multinode-825878-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 cp multinode-825878-m02:/home/docker/cp-test.txt multinode-825878:/home/docker/cp-test_multinode-825878-m02_multinode-825878.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 ssh -n multinode-825878-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 ssh -n multinode-825878 "sudo cat /home/docker/cp-test_multinode-825878-m02_multinode-825878.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 cp multinode-825878-m02:/home/docker/cp-test.txt multinode-825878-m03:/home/docker/cp-test_multinode-825878-m02_multinode-825878-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 ssh -n multinode-825878-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 ssh -n multinode-825878-m03 "sudo cat /home/docker/cp-test_multinode-825878-m02_multinode-825878-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 cp testdata/cp-test.txt multinode-825878-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 ssh -n multinode-825878-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 cp multinode-825878-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1745559734/001/cp-test_multinode-825878-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 ssh -n multinode-825878-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 cp multinode-825878-m03:/home/docker/cp-test.txt multinode-825878:/home/docker/cp-test_multinode-825878-m03_multinode-825878.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 ssh -n multinode-825878-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 ssh -n multinode-825878 "sudo cat /home/docker/cp-test_multinode-825878-m03_multinode-825878.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 cp multinode-825878-m03:/home/docker/cp-test.txt multinode-825878-m02:/home/docker/cp-test_multinode-825878-m03_multinode-825878-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 ssh -n multinode-825878-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 ssh -n multinode-825878-m02 "sudo cat /home/docker/cp-test_multinode-825878-m03_multinode-825878-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.15s)

                                                
                                    
TestMultiNode/serial/StopNode (2.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-825878 node stop m03: (1.256307758s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-825878 status: exit status 7 (472.246749ms)

                                                
                                                
-- stdout --
	multinode-825878
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-825878-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-825878-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-825878 status --alsologtostderr: exit status 7 (476.535645ms)

                                                
                                                
-- stdout --
	multinode-825878
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-825878-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-825878-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 20:13:12.380730  147135 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:13:12.381002  147135 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:13:12.381013  147135 out.go:374] Setting ErrFile to fd 2...
	I1227 20:13:12.381020  147135 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:13:12.381294  147135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:13:12.381446  147135 out.go:368] Setting JSON to false
	I1227 20:13:12.381468  147135 mustload.go:66] Loading cluster: multinode-825878
	I1227 20:13:12.381590  147135 notify.go:221] Checking for updates...
	I1227 20:13:12.381933  147135 config.go:182] Loaded profile config "multinode-825878": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:13:12.381953  147135 status.go:174] checking status of multinode-825878 ...
	I1227 20:13:12.382390  147135 cli_runner.go:164] Run: docker container inspect multinode-825878 --format={{.State.Status}}
	I1227 20:13:12.400868  147135 status.go:371] multinode-825878 host status = "Running" (err=<nil>)
	I1227 20:13:12.400887  147135 host.go:66] Checking if "multinode-825878" exists ...
	I1227 20:13:12.401189  147135 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-825878
	I1227 20:13:12.418745  147135 host.go:66] Checking if "multinode-825878" exists ...
	I1227 20:13:12.419011  147135 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:13:12.419072  147135 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-825878
	I1227 20:13:12.435617  147135 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/multinode-825878/id_rsa Username:docker}
	I1227 20:13:12.521664  147135 ssh_runner.go:195] Run: systemctl --version
	I1227 20:13:12.527695  147135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:13:12.538841  147135 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:13:12.591176  147135 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-27 20:13:12.581308856 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:13:12.591615  147135 kubeconfig.go:125] found "multinode-825878" server: "https://192.168.67.2:8443"
	I1227 20:13:12.591641  147135 api_server.go:166] Checking apiserver status ...
	I1227 20:13:12.591673  147135 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:12.602855  147135 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1233/cgroup
	I1227 20:13:12.610628  147135 ssh_runner.go:195] Run: sudo grep ^0:: /proc/1233/cgroup
	I1227 20:13:12.617662  147135 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/system.slice/crio-79cedcbc9d84a7e3e738fc5ae4d385bd7106ab537a01944ecc97133c9c68dfc4.scope/container/cgroup.freeze
	I1227 20:13:12.624700  147135 api_server.go:299] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1227 20:13:12.629967  147135 api_server.go:325] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1227 20:13:12.629990  147135 status.go:463] multinode-825878 apiserver status = Running (err=<nil>)
	I1227 20:13:12.630002  147135 status.go:176] multinode-825878 status: &{Name:multinode-825878 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:13:12.630025  147135 status.go:174] checking status of multinode-825878-m02 ...
	I1227 20:13:12.630260  147135 cli_runner.go:164] Run: docker container inspect multinode-825878-m02 --format={{.State.Status}}
	I1227 20:13:12.648615  147135 status.go:371] multinode-825878-m02 host status = "Running" (err=<nil>)
	I1227 20:13:12.648634  147135 host.go:66] Checking if "multinode-825878-m02" exists ...
	I1227 20:13:12.648839  147135 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-825878-m02
	I1227 20:13:12.665811  147135 host.go:66] Checking if "multinode-825878-m02" exists ...
	I1227 20:13:12.666066  147135 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:13:12.666119  147135 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-825878-m02
	I1227 20:13:12.682697  147135 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/22332-10897/.minikube/machines/multinode-825878-m02/id_rsa Username:docker}
	I1227 20:13:12.769706  147135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:13:12.781360  147135 status.go:176] multinode-825878-m02 status: &{Name:multinode-825878-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:13:12.781392  147135 status.go:174] checking status of multinode-825878-m03 ...
	I1227 20:13:12.781705  147135 cli_runner.go:164] Run: docker container inspect multinode-825878-m03 --format={{.State.Status}}
	I1227 20:13:12.798978  147135 status.go:371] multinode-825878-m03 host status = "Stopped" (err=<nil>)
	I1227 20:13:12.798998  147135 status.go:384] host is not running, skipping remaining checks
	I1227 20:13:12.799004  147135 status.go:176] multinode-825878-m03 status: &{Name:multinode-825878-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.21s)
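Note: the status log above boils down to a handful of per-node probes: container state from the driver, /var usage and kubelet state over SSH, and an apiserver /healthz check. A minimal shell sketch of the same probes, reusing the profile name multinode-825878 and the apiserver address 192.168.67.2:8443 from the log (curl stands in for the Go HTTP client used internally):

    # host state as seen by the docker driver
    docker container inspect multinode-825878 --format='{{.State.Status}}'

    # /var usage on the node (the value status compares against its threshold)
    out/minikube-linux-amd64 -p multinode-825878 ssh -- "df -h /var | awk 'NR==2{print \$5}'"

    # kubelet service state on the node (simplified form of the systemctl check in the log)
    out/minikube-linux-amd64 -p multinode-825878 ssh -- "sudo systemctl is-active kubelet"

    # apiserver health probe
    curl -k https://192.168.67.2:8443/healthz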

                                                
                                    
TestMultiNode/serial/StartAfterStop (6.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-825878 node start m03 -v=5 --alsologtostderr: (6.22723114s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (6.92s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (80.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-825878
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-825878
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-825878: (29.401660297s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-825878 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-825878 --wait=true -v=5 --alsologtostderr: (51.462026773s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-825878
--- PASS: TestMultiNode/serial/RestartKeepsNodes (80.98s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-825878 node delete m03: (4.584432955s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.16s)
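Note: the go-template in the last check prints one Ready condition status per remaining node. An equivalent jsonpath form, shown only as an illustration of the same query and not part of the test:

    kubectl get nodes -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'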

                                                
                                    
TestMultiNode/serial/StopMultiNode (28.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 stop
E1227 20:15:00.938381   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/functional-487501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-825878 stop: (28.392542644s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-825878 status: exit status 7 (97.08255ms)

                                                
                                                
-- stdout --
	multinode-825878
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-825878-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-825878 status --alsologtostderr: exit status 7 (94.928958ms)

                                                
                                                
-- stdout --
	multinode-825878
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-825878-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 20:15:14.411262  157033 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:15:14.411363  157033 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:15:14.411375  157033 out.go:374] Setting ErrFile to fd 2...
	I1227 20:15:14.411382  157033 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:15:14.411586  157033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:15:14.411789  157033 out.go:368] Setting JSON to false
	I1227 20:15:14.411816  157033 mustload.go:66] Loading cluster: multinode-825878
	I1227 20:15:14.411966  157033 notify.go:221] Checking for updates...
	I1227 20:15:14.412291  157033 config.go:182] Loaded profile config "multinode-825878": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:14.412324  157033 status.go:174] checking status of multinode-825878 ...
	I1227 20:15:14.412872  157033 cli_runner.go:164] Run: docker container inspect multinode-825878 --format={{.State.Status}}
	I1227 20:15:14.433503  157033 status.go:371] multinode-825878 host status = "Stopped" (err=<nil>)
	I1227 20:15:14.433537  157033 status.go:384] host is not running, skipping remaining checks
	I1227 20:15:14.433545  157033 status.go:176] multinode-825878 status: &{Name:multinode-825878 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:15:14.433586  157033 status.go:174] checking status of multinode-825878-m02 ...
	I1227 20:15:14.433944  157033 cli_runner.go:164] Run: docker container inspect multinode-825878-m02 --format={{.State.Status}}
	I1227 20:15:14.451177  157033 status.go:371] multinode-825878-m02 host status = "Stopped" (err=<nil>)
	I1227 20:15:14.451206  157033 status.go:384] host is not running, skipping remaining checks
	I1227 20:15:14.451221  157033 status.go:176] multinode-825878-m02 status: &{Name:multinode-825878-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.59s)
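Note: once both hosts are stopped, minikube status exits with code 7 while still printing the per-node report, so scripts should capture the exit code instead of treating it as a hard failure. A small sketch against the same profile:

    out=$(out/minikube-linux-amd64 -p multinode-825878 status) || rc=$?
    echo "status exited with ${rc:-0}"
    printf '%s\n' "$out"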

                                                
                                    
TestMultiNode/serial/RestartMultiNode (27.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-825878 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-825878 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (27.179852071s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-825878 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (27.75s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (22.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-825878
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-825878-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-825878-m02 --driver=docker  --container-runtime=crio: exit status 14 (75.760921ms)

                                                
                                                
-- stdout --
	* [multinode-825878-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-825878-m02' is duplicated with machine name 'multinode-825878-m02' in profile 'multinode-825878'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-825878-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-825878-m03 --driver=docker  --container-runtime=crio: (20.019273194s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-825878
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-825878: exit status 80 (287.467473ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-825878 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-825878-m03 already exists in multinode-825878-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-825878-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-825878-m03: (2.2715988s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.71s)
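Note: both rejections above are name collisions: a standalone profile may not reuse a node name that belongs to an existing profile (multinode-825878-m02), and node add refuses a node whose name is already taken by another profile (multinode-825878-m03). Checking what is already in use before picking a name, with commands that appear elsewhere in this run:

    out/minikube-linux-amd64 profile list
    out/minikube-linux-amd64 node list -p multinode-825878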

                                                
                                    
TestScheduledStopUnix (94.77s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-639699 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-639699 --memory=3072 --driver=docker  --container-runtime=crio: (19.003971661s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-639699 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1227 20:16:28.090391  166845 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:16:28.090505  166845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:16:28.090515  166845 out.go:374] Setting ErrFile to fd 2...
	I1227 20:16:28.090519  166845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:16:28.090714  166845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:16:28.090958  166845 out.go:368] Setting JSON to false
	I1227 20:16:28.091042  166845 mustload.go:66] Loading cluster: scheduled-stop-639699
	I1227 20:16:28.091324  166845 config.go:182] Loaded profile config "scheduled-stop-639699": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:28.091384  166845 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/scheduled-stop-639699/config.json ...
	I1227 20:16:28.091551  166845 mustload.go:66] Loading cluster: scheduled-stop-639699
	I1227 20:16:28.091644  166845 config.go:182] Loaded profile config "scheduled-stop-639699": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-639699 -n scheduled-stop-639699
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-639699 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1227 20:16:28.466959  167001 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:16:28.467072  167001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:16:28.467078  167001 out.go:374] Setting ErrFile to fd 2...
	I1227 20:16:28.467084  167001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:16:28.467327  167001 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:16:28.467560  167001 out.go:368] Setting JSON to false
	I1227 20:16:28.467728  167001 daemonize_unix.go:73] killing process 166880 as it is an old scheduled stop
	I1227 20:16:28.467830  167001 mustload.go:66] Loading cluster: scheduled-stop-639699
	I1227 20:16:28.468174  167001 config.go:182] Loaded profile config "scheduled-stop-639699": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:28.468259  167001 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/scheduled-stop-639699/config.json ...
	I1227 20:16:28.468429  167001 mustload.go:66] Loading cluster: scheduled-stop-639699
	I1227 20:16:28.468517  167001 config.go:182] Loaded profile config "scheduled-stop-639699": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1227 20:16:28.472622   14427 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/scheduled-stop-639699/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-639699 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1227 20:16:41.517570   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-639699 -n scheduled-stop-639699
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-639699
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-639699 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1227 20:16:54.354342  167710 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:16:54.354477  167710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:16:54.354490  167710 out.go:374] Setting ErrFile to fd 2...
	I1227 20:16:54.354497  167710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:16:54.354721  167710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:16:54.354966  167710 out.go:368] Setting JSON to false
	I1227 20:16:54.355038  167710 mustload.go:66] Loading cluster: scheduled-stop-639699
	I1227 20:16:54.355342  167710 config.go:182] Loaded profile config "scheduled-stop-639699": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:54.355411  167710 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/scheduled-stop-639699/config.json ...
	I1227 20:16:54.355596  167710 mustload.go:66] Loading cluster: scheduled-stop-639699
	I1227 20:16:54.355690  167710 config.go:182] Loaded profile config "scheduled-stop-639699": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-639699
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-639699: exit status 7 (77.712561ms)

                                                
                                                
-- stdout --
	scheduled-stop-639699
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-639699 -n scheduled-stop-639699
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-639699 -n scheduled-stop-639699: exit status 7 (75.358057ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-639699" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-639699
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-639699: (4.280673541s)
--- PASS: TestScheduledStopUnix (94.77s)
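Note: the test walks the full scheduled-stop workflow: schedule a stop, replace it with a shorter one (the old scheduler process is killed), cancel the pending stop, and finally let a 15s schedule expire so status reports Stopped. The same flow condensed into plain commands, assuming the scheduled-stop-639699 profile from the log:

    # schedule a stop, then replace it with a shorter one
    out/minikube-linux-amd64 stop -p scheduled-stop-639699 --schedule 5m
    out/minikube-linux-amd64 stop -p scheduled-stop-639699 --schedule 15s

    # cancel whatever is still pending
    out/minikube-linux-amd64 stop -p scheduled-stop-639699 --cancel-scheduled

    # inspect the pending stop time (empty when nothing is scheduled)
    out/minikube-linux-amd64 status --format='{{.TimeToStop}}' -p scheduled-stop-639699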

                                                
                                    
TestInsufficientStorage (8.44s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-352558 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-352558 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (6.052669223s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7d595c5c-ba6a-487c-a630-30ea3b9d0f82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-352558] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8c7c29cc-5696-4237-a3a8-86c55a896298","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22332"}}
	{"specversion":"1.0","id":"2919fbc2-f445-4a10-910b-28647ae71be2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bb6c45dd-54ee-47a4-bb94-56b42f77d94d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig"}}
	{"specversion":"1.0","id":"023b7a91-f960-4891-a3ef-fc55ddd06e60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube"}}
	{"specversion":"1.0","id":"f79bb261-8824-4bf8-8969-791d3fb8cbf6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f95b830e-dfd9-478a-9fa7-c067eb1acd9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"efd0f127-b30c-4c8f-b768-c4e91c1a1117","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"70902f1f-3aec-48ef-b5f3-c1ecd4bf1514","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"b2fd3f6c-db51-4d30-ae68-6701e49e7934","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4eaad206-efc2-4715-b5f1-72ddf056cc42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"88306ee2-49ff-4b00-a9d4-bb1d05b0de68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-352558\" primary control-plane node in \"insufficient-storage-352558\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"bc1c9a78-60f0-46d9-b578-ad0344639660","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1766570851-22316 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"cfa87b83-5c23-4b6a-b5ef-0895ab772f1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"df727e01-460b-492b-b2de-458d047991ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-352558 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-352558 --output=json --layout=cluster: exit status 7 (270.895636ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-352558","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-352558","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 20:17:50.110596  170634 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-352558" does not appear in /home/jenkins/minikube-integration/22332-10897/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-352558 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-352558 --output=json --layout=cluster: exit status 7 (273.73411ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-352558","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-352558","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 20:17:50.384101  170745 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-352558" does not appear in /home/jenkins/minikube-integration/22332-10897/kubeconfig
	E1227 20:17:50.394848  170745 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/insufficient-storage-352558/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-352558" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-352558
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-352558: (1.841486529s)
--- PASS: TestInsufficientStorage (8.44s)
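Note: with --output=json --layout=cluster the whole status above is one JSON document (StatusCode 507 / InsufficientStorage), which is easier to consume programmatically than the plain-text form. A small sketch, assuming jq is installed on the host:

    out/minikube-linux-amd64 status -p insufficient-storage-352558 --output=json --layout=cluster \
      | jq -r '.StatusName, (.Nodes[] | "\(.Name): \(.StatusName)")'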

                                                
                                    
TestRunningBinaryUpgrade (289.16s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3066386761 start -p running-upgrade-593051 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1227 20:20:00.936066   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/functional-487501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3066386761 start -p running-upgrade-593051 --memory=3072 --vm-driver=docker  --container-runtime=crio: (19.206138342s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-593051 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-593051 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m26.918720572s)
helpers_test.go:176: Cleaning up "running-upgrade-593051" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-593051
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-593051: (2.454961141s)
--- PASS: TestRunningBinaryUpgrade (289.16s)

                                                
                                    
TestKubernetesUpgrade (328.11s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-498227 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-498227 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.617771173s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-498227 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-498227 --alsologtostderr: (1.889066098s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-498227 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-498227 status --format={{.Host}}: exit status 7 (90.631837ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-498227 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-498227 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m56.242682746s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-498227 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-498227 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-498227 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (79.056035ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-498227] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-498227
	    minikube start -p kubernetes-upgrade-498227 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4982272 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-498227 --kubernetes-version=v1.35.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-498227 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-498227 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4.497346121s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-498227" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-498227
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-498227: (2.631585988s)
--- PASS: TestKubernetesUpgrade (328.11s)
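Note: the supported path above is an in-place upgrade (stop, then start again with a newer --kubernetes-version); the attempted downgrade is refused with exit code 106 and the recovery options are printed verbatim in the K8S_DOWNGRADE_UNSUPPORTED message. The upgrade leg, condensed:

    out/minikube-linux-amd64 start -p kubernetes-upgrade-498227 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 stop -p kubernetes-upgrade-498227
    out/minikube-linux-amd64 start -p kubernetes-upgrade-498227 --memory=3072 --kubernetes-version=v1.35.0 --driver=docker --container-runtime=crio
    kubectl --context kubernetes-upgrade-498227 version --output=json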

                                                
                                    
TestMissingContainerUpgrade (70.45s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.859179964 start -p missing-upgrade-167772 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.859179964 start -p missing-upgrade-167772 --memory=3072 --driver=docker  --container-runtime=crio: (20.463665702s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-167772
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-167772: (4.126321726s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-167772
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-167772 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-167772 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.419202766s)
helpers_test.go:176: Cleaning up "missing-upgrade-167772" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-167772
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-167772: (3.837981439s)
--- PASS: TestMissingContainerUpgrade (70.45s)

                                                
                                    
TestPause/serial/Start (55.01s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-260501 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-260501 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (55.010462307s)
--- PASS: TestPause/serial/Start (55.01s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.78s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.78s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (300.21s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.1062625949 start -p stopped-upgrade-379247 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1227 20:18:04.561574   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.1062625949 start -p stopped-upgrade-379247 --memory=3072 --vm-driver=docker  --container-runtime=crio: (39.153331151s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.1062625949 -p stopped-upgrade-379247 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.1062625949 -p stopped-upgrade-379247 stop: (3.593838236s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-379247 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-379247 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m17.457445542s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (300.21s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.47s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-260501 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-260501 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.452726012s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.47s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-501383 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-501383 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (80.474947ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-501383] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (19.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-501383 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-501383 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (19.011451915s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-501383 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (19.40s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (24.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-501383 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-501383 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (21.878110876s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-501383 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-501383 status -o json: exit status 2 (319.787369ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-501383","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-501383
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-501383: (2.107796857s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (24.31s)

                                                
                                    
TestNetworkPlugins/group/false (4.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-436655 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-436655 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (197.366481ms)

                                                
                                                
-- stdout --
	* [false-436655] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 20:19:44.258396  203160 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:19:44.258863  203160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:19:44.258878  203160 out.go:374] Setting ErrFile to fd 2...
	I1227 20:19:44.258884  203160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:19:44.259360  203160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-10897/.minikube/bin
	I1227 20:19:44.260107  203160 out.go:368] Setting JSON to false
	I1227 20:19:44.261577  203160 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3733,"bootTime":1766863051,"procs":285,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1227 20:19:44.261671  203160 start.go:143] virtualization: kvm guest
	I1227 20:19:44.264115  203160 out.go:179] * [false-436655] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1227 20:19:44.265790  203160 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:19:44.265790  203160 notify.go:221] Checking for updates...
	I1227 20:19:44.269837  203160 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:19:44.272680  203160 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-10897/kubeconfig
	I1227 20:19:44.273909  203160 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-10897/.minikube
	I1227 20:19:44.275466  203160 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1227 20:19:44.277322  203160 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:19:44.279203  203160 config.go:182] Loaded profile config "NoKubernetes-501383": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1227 20:19:44.279351  203160 config.go:182] Loaded profile config "kubernetes-upgrade-498227": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:19:44.279482  203160 config.go:182] Loaded profile config "stopped-upgrade-379247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1227 20:19:44.279615  203160 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:19:44.310863  203160 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1227 20:19:44.310981  203160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:19:44.371495  203160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:57 OomKillDisable:false NGoroutines:69 SystemTime:2025-12-27 20:19:44.361697151 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1227 20:19:44.371655  203160 docker.go:319] overlay module found
	I1227 20:19:44.373548  203160 out.go:179] * Using the docker driver based on user configuration
	I1227 20:19:44.374768  203160 start.go:309] selected driver: docker
	I1227 20:19:44.374782  203160 start.go:928] validating driver "docker" against <nil>
	I1227 20:19:44.374793  203160 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:19:44.376607  203160 out.go:203] 
	W1227 20:19:44.377813  203160 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1227 20:19:44.378895  203160 out.go:203] 

                                                
                                                
** /stderr **
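Note: the failure above is the expected one: with the crio runtime minikube insists on a CNI, so --cni=false is rejected before any node is created. For illustration only, the same start line with an explicit CNI selected (bridge is one of the documented --cni values) would get past this check:

    out/minikube-linux-amd64 start -p false-436655 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio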
net_test.go:88: 
----------------------- debugLogs start: false-436655 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-436655

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-436655

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-436655

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-436655

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-436655

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-436655

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-436655

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-436655

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-436655

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-436655

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-436655

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-436655" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-436655" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 27 Dec 2025 20:19:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-498227
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 27 Dec 2025 20:18:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-379247
contexts:
- context:
    cluster: kubernetes-upgrade-498227
    user: kubernetes-upgrade-498227
  name: kubernetes-upgrade-498227
- context:
    cluster: stopped-upgrade-379247
    user: stopped-upgrade-379247
  name: stopped-upgrade-379247
current-context: kubernetes-upgrade-498227
kind: Config
users:
- name: kubernetes-upgrade-498227
  user:
    client-certificate: /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/kubernetes-upgrade-498227/client.crt
    client-key: /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/kubernetes-upgrade-498227/client.key
- name: stopped-upgrade-379247
  user:
    client-certificate: /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/stopped-upgrade-379247/client.crt
    client-key: /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/stopped-upgrade-379247/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-436655

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-436655"

                                                
                                                
----------------------- debugLogs end: false-436655 [took: 3.726597257s] --------------------------------
helpers_test.go:176: Cleaning up "false-436655" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-436655
--- PASS: TestNetworkPlugins/group/false (4.11s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (4.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-501383 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-501383 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.272104839s)
--- PASS: TestNoKubernetes/serial/Start (4.27s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22332-10897/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-501383 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-501383 "sudo systemctl is-active --quiet service kubelet": exit status 1 (280.125554ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (74.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (27.810261329s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (46.724553225s)
--- PASS: TestNoKubernetes/serial/ProfileList (74.54s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-501383
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-501383: (1.243281s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (8.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-501383 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-501383 --driver=docker  --container-runtime=crio: (8.233458404s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.23s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-501383 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-501383 "sudo systemctl is-active --quiet service kubelet": exit status 1 (261.9544ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                    
x
+
TestPreload/Start-NoPreload-PullImage (53.55s)

                                                
                                                
=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-514721 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
E1227 20:21:23.981173   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/functional-487501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:21:41.516902   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/addons-416077/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-514721 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (44.64662995s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-514721 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:62: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-514721
preload_test.go:62: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-514721: (8.016759294s)
--- PASS: TestPreload/Start-NoPreload-PullImage (53.55s)

                                                
                                    
x
+
TestPreload/Restart-With-Preload-Check-User-Image (50.19s)

                                                
                                                
=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-514721 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-514721 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (49.784969281s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-514721 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (50.19s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-379247
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (37.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-436655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-436655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (37.626974133s)
--- PASS: TestNetworkPlugins/group/auto/Start (37.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (41.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-436655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-436655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (41.620808657s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (41.62s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-436655 "pgrep -a kubelet"
I1227 20:24:20.226429   14427 config.go:182] Loaded profile config "auto-436655": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-436655 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-mwvbh" [77259fbc-8267-457e-9c6b-ccbd2f9cb23d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-mwvbh" [77259fbc-8267-457e-9c6b-ccbd2f9cb23d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003515367s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-436655 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-436655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-436655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (51.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-436655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-436655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (51.404010401s)
--- PASS: TestNetworkPlugins/group/calico/Start (51.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (40.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-436655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-436655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (40.707147371s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (40.71s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-7sknv" [8333dd14-554b-48cf-ad9f-e8a197855375] Running
E1227 20:25:00.936764   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/functional-487501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00378385s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-436655 "pgrep -a kubelet"
I1227 20:25:06.392651   14427 config.go:182] Loaded profile config "kindnet-436655": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-436655 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-qvgvc" [149b96a0-696f-4a0c-9e90-938d0a223704] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-qvgvc" [149b96a0-696f-4a0c-9e90-938d0a223704] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004921088s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-436655 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-436655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-436655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-436655 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-436655 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-bxd47" [054f6e0c-e484-4a47-b2be-77f2b70b3355] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-bxd47" [054f6e0c-e484-4a47-b2be-77f2b70b3355] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004140073s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-q6qh9" [0b2eee43-7d94-4632-9a76-5b39f29b8cd8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.089825707s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (39.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-436655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-436655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (39.277426848s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (39.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-436655 "pgrep -a kubelet"
I1227 20:25:39.182314   14427 config.go:182] Loaded profile config "calico-436655": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (9.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-436655 replace --force -f testdata/netcat-deployment.yaml
I1227 20:25:39.690264   14427 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1227 20:25:39.696765   14427 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-gzpck" [e8c6a1dd-bd1e-46ee-8a75-1afe1647e8e4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-gzpck" [e8c6a1dd-bd1e-46ee-8a75-1afe1647e8e4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004404409s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.56s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-436655 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-436655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-436655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-436655 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-436655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-436655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (46.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-436655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-436655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (46.286352401s)
--- PASS: TestNetworkPlugins/group/flannel/Start (46.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (60.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-436655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-436655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m0.894647336s)
--- PASS: TestNetworkPlugins/group/bridge/Start (60.89s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-436655 "pgrep -a kubelet"
I1227 20:26:14.774078   14427 config.go:182] Loaded profile config "enable-default-cni-436655": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-436655 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-wgtsn" [fbe80db0-37c9-4c6e-94bd-7ebb7cafd32f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-wgtsn" [fbe80db0-37c9-4c6e-94bd-7ebb7cafd32f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.004514298s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-436655 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-436655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-436655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (48.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-762177 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-762177 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (48.83396501s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (48.83s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (46.6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-014435 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-014435 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (46.602209354s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (46.60s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-7z954" [212672e5-0e2f-465e-8ff1-08a6b61fb26e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003841296s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-436655 "pgrep -a kubelet"
I1227 20:26:53.768362   14427 config.go:182] Loaded profile config "flannel-436655": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-436655 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-r9q4j" [c60a3c8b-35d9-46d1-a5f0-02e6c9ccd0af] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-r9q4j" [c60a3c8b-35d9-46d1-a5f0-02e6c9ccd0af] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003366939s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-436655 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-436655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-436655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-436655 "pgrep -a kubelet"
I1227 20:27:11.026074   14427 config.go:182] Loaded profile config "bridge-436655": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-436655 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-vq4hh" [59661199-430a-43bc-8d97-82818c10ae3a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-vq4hh" [59661199-430a-43bc-8d97-82818c10ae3a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.00430555s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.21s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-762177 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [361ecc55-6296-4f19-ba72-adde33ca680f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [361ecc55-6296-4f19-ba72-adde33ca680f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003229754s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-762177 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.29s)
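DeployApp creates the busybox pod from testdata and then reads the container's open-file limit. A manual sketch of the same sequence (paths as logged, relative to the integration test directory; kubectl wait replaces the test's readiness polling):
    kubectl --context old-k8s-version-762177 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-762177 wait --for=condition=Ready pod busybox --timeout=8m
    kubectl --context old-k8s-version-762177 exec busybox -- /bin/sh -c "ulimit -n"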

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-436655 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-436655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-436655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.09s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (16.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-762177 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-762177 --alsologtostderr -v=3: (16.465229681s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.47s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (37.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-820583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-820583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (37.157959266s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (37.16s)
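With --embed-certs, minikube writes the client certificate and key into the kubeconfig entry as inline base64 data instead of file paths. A hedged way to confirm that for the profile above (assumes the kubeconfig context carries the profile name, as minikube sets up by default):
    kubectl config view --raw --minify --context embed-certs-820583 \
      | grep -E 'certificate-authority-data|client-certificate-data|client-key-data'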

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-014435 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [777b0dc8-69fb-44e6-85ed-eb73c72cfc69] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [777b0dc8-69fb-44e6-85ed-eb73c72cfc69] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003325094s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-014435 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.22s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-762177 -n old-k8s-version-762177
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-762177 -n old-k8s-version-762177: exit status 7 (92.709654ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-762177 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (47.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-762177 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-762177 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (46.754020421s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-762177 -n old-k8s-version-762177
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (47.12s)
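The Stop, EnableAddonAfterStop, and SecondStart steps above exercise a stopped profile end to end: status exits non-zero (exit status 7 with Host reported as Stopped in this run), the dashboard addon is enabled while the cluster is down, and the second start reuses the original flags. The same sequence, sketched from the commands in the log:
    out/minikube-linux-amd64 stop -p old-k8s-version-762177 --alsologtostderr -v=3
    out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-762177 -n old-k8s-version-762177
    # in this run: prints "Stopped" and exits 7, which the test treats as acceptable
    out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-762177 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4
    out/minikube-linux-amd64 start -p old-k8s-version-762177 --memory=3072 --alsologtostderr --wait=true \
      --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.28.0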

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-954154 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-954154 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (40.014114406s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.01s)
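--apiserver-port=8444 moves the API server off the usual 8443. One hedged way to confirm the port actually in use is to read it back from the kubernetes endpoint, since with the docker driver the kubeconfig URL points at a forwarded localhost port rather than 8444:
    kubectl --context default-k8s-diff-port-954154 get endpoints kubernetes \
      -o jsonpath='{.subsets[0].ports[0].port}{"\n"}'
    # expected: 8444 (the value passed via --apiserver-port)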

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (16.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-014435 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-014435 --alsologtostderr -v=3: (16.271761134s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.27s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-014435 -n no-preload-014435
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-014435 -n no-preload-014435: exit status 7 (104.227938ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-014435 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (51.51s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-014435 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-014435 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (51.149004962s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-014435 -n no-preload-014435
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (51.51s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (7.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-820583 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [cb192a66-d82e-4965-a6f8-046b0b6618d0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [cb192a66-d82e-4965-a6f8-046b0b6618d0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.004748457s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-820583 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.24s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (16.62s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-820583 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-820583 --alsologtostderr -v=3: (16.624412213s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.62s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (6.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-954154 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [d25d862a-9040-4a22-935d-4e6d3eac79d1] Pending
helpers_test.go:353: "busybox" [d25d862a-9040-4a22-935d-4e6d3eac79d1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 6.003422394s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-954154 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (6.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-bfhwt" [32c3377f-ae5d-4e77-ae87-bbf26d43e921] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003454445s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-820583 -n embed-certs-820583
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-820583 -n embed-certs-820583: exit status 7 (94.274083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-820583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (45.69s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-820583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-820583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (45.308105552s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-820583 -n embed-certs-820583
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (45.69s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (18.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-954154 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-954154 --alsologtostderr -v=3: (18.390961048s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.39s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-bfhwt" [32c3377f-ae5d-4e77-ae87-bbf26d43e921] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003802478s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-762177 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)
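After the restart, the UserAppExistsAfterStop and AddonExistsAfterStop checks wait for the dashboard pods by label and then describe the metrics-scraper deployment. A manual sketch, with kubectl wait standing in for the polling (it assumes the pods have already been created):
    kubectl --context old-k8s-version-762177 -n kubernetes-dashboard \
      wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
    kubectl --context old-k8s-version-762177 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper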

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-762177 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-954154 -n default-k8s-diff-port-954154
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-954154 -n default-k8s-diff-port-954154: exit status 7 (82.993368ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-954154 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-954154 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-954154 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (52.02491757s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-954154 -n default-k8s-diff-port-954154
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.35s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (21.42s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-307728 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-307728 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (21.422859467s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (21.42s)
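Here --wait=apiserver,system_pods,default_sa limits start-up gating to those components, and --extra-config=kubeadm.pod-network-cidr hands the pod CIDR to kubeadm; no CNI is installed, which is why the later newest-cni UserApp/Addon subtests only log a warning instead of scheduling pods. A hedged way to confirm the CIDR reached kubeadm's ClusterConfiguration (ConfigMap name and field are the standard kubeadm ones):
    kubectl --context newest-cni-307728 -n kube-system get configmap kubeadm-config \
      -o jsonpath='{.data.ClusterConfiguration}' | grep podSubnet
    # expected: podSubnet: 10.42.0.0/16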

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-v6b7x" [839a59c8-9971-48f1-a068-698f15eb006b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003823726s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-v6b7x" [839a59c8-9971-48f1-a068-698f15eb006b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004731025s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-014435 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-014435 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestPreload/PreloadSrc/gcs (3.99s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-588477 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
preload_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-gcs-588477 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (3.792773788s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-588477" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-588477
--- PASS: TestPreload/PreloadSrc/gcs (3.99s)
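The PreloadSrc subtests only download artifacts: --download-only fetches the preload tarball and binaries without creating a cluster, and --preload-source selects where the tarball comes from (gcs here; github and a cached gcs run follow in the sibling subtests). The same download-and-clean-up cycle, from the commands in the log:
    out/minikube-linux-amd64 start -p test-preload-dl-gcs-588477 --download-only \
      --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 \
      --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 delete -p test-preload-dl-gcs-588477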

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-307728 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-307728 --alsologtostderr -v=3: (10.259625044s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.26s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-2hqqv" [db51784c-7bd0-4825-a1ed-894a31ecf548] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003775625s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestPreload/PreloadSrc/github (5.61s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-github-805734 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
preload_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-github-805734 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (4.853098635s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-805734" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-github-805734
E1227 20:29:21.050441   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/auto-436655/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestPreload/PreloadSrc/github (5.61s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-2hqqv" [db51784c-7bd0-4825-a1ed-894a31ecf548] Running
E1227 20:29:20.411464   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/auto-436655/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:29:20.416738   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/auto-436655/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:29:20.427005   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/auto-436655/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:29:20.447327   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/auto-436655/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:29:20.487611   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/auto-436655/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:29:20.568792   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/auto-436655/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:29:20.729339   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/auto-436655/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003521707s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-820583 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestPreload/PreloadSrc/gcs-cached (0.78s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-cached-275955 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
E1227 20:29:21.691130   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/auto-436655/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-275955" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-cached-275955
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.78s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-307728 -n newest-cni-307728
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-307728 -n newest-cni-307728: exit status 7 (77.838609ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-307728 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-307728 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
E1227 20:29:22.971529   14427 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/auto-436655/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-307728 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (9.934184942s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-307728 -n newest-cni-307728
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-820583 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-307728 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-nqh72" [48f88a06-6ef5-46f5-8c61-9b2e57b5fe4c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002768265s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-nqh72" [48f88a06-6ef5-46f5-8c61-9b2e57b5fe4c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003573443s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-954154 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-954154 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                    

Test skip (27/332)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:765: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-436655 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-436655

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-436655

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-436655

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-436655

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-436655

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-436655

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-436655

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-436655

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-436655

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-436655

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-436655

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-436655" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-436655" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 27 Dec 2025 20:19:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: NoKubernetes-501383
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 27 Dec 2025 20:18:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-379247
contexts:
- context:
    cluster: NoKubernetes-501383
    extensions:
    - extension:
        last-update: Sat, 27 Dec 2025 20:19:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-501383
  name: NoKubernetes-501383
- context:
    cluster: stopped-upgrade-379247
    user: stopped-upgrade-379247
  name: stopped-upgrade-379247
current-context: ""
kind: Config
users:
- name: NoKubernetes-501383
  user:
    client-certificate: /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/NoKubernetes-501383/client.crt
    client-key: /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/NoKubernetes-501383/client.key
- name: stopped-upgrade-379247
  user:
    client-certificate: /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/stopped-upgrade-379247/client.crt
    client-key: /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/stopped-upgrade-379247/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-436655

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-436655"

                                                
                                                
----------------------- debugLogs end: kubenet-436655 [took: 3.910785007s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-436655" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-436655
--- SKIP: TestNetworkPlugins/group/kubenet (4.14s)
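
The 'Profile "kubenet-436655" not found' hints repeated through the debug log above already name the recovery path. A minimal sketch of that sequence, assuming the same profile name and the crio runtime used in this run (the --container-runtime flag and the final verification step are illustrative additions, not commands taken from the report):

	# Confirm the profile is absent (command suggested verbatim by the log).
	minikube profile list
	# Create the missing profile, as the log suggests; --container-runtime=crio matches this run's configuration.
	minikube start -p kubenet-436655 --container-runtime=crio
	# With the profile started, the kubectl context probed by the debug steps should resolve.
	kubectl --context kubenet-436655 get nodes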

                                                
                                    
TestNetworkPlugins/group/cilium (3.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-436655 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-436655

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-436655

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-436655

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-436655

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-436655

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-436655

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-436655

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-436655

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-436655

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-436655

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-436655

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-436655" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-436655

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-436655

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-436655

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-436655

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-436655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-436655" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 27 Dec 2025 20:19:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-498227
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22332-10897/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 27 Dec 2025 20:18:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-379247
contexts:
- context:
    cluster: kubernetes-upgrade-498227
    user: kubernetes-upgrade-498227
  name: kubernetes-upgrade-498227
- context:
    cluster: stopped-upgrade-379247
    user: stopped-upgrade-379247
  name: stopped-upgrade-379247
current-context: kubernetes-upgrade-498227
kind: Config
users:
- name: kubernetes-upgrade-498227
  user:
    client-certificate: /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/kubernetes-upgrade-498227/client.crt
    client-key: /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/kubernetes-upgrade-498227/client.key
- name: stopped-upgrade-379247
  user:
    client-certificate: /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/stopped-upgrade-379247/client.crt
    client-key: /home/jenkins/minikube-integration/22332-10897/.minikube/profiles/stopped-upgrade-379247/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-436655

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-436655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436655"

                                                
                                                
----------------------- debugLogs end: cilium-436655 [took: 3.565200347s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-436655" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-436655
--- SKIP: TestNetworkPlugins/group/cilium (3.72s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-541137" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-541137
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)

                                                
                                    